WO2014019701A1 - Method, system and computer program product for sequencing asynchronous messages in a distributed and parallel environment - Google Patents
Method, system and computer program product for sequencing asynchronous messages in a distributed and parallel environment
- Publication number
- WO2014019701A1 (PCT/EP2013/002302)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- message
- sequence
- rank
- outbound
- handlers
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/546—Message passing systems or structures, e.g. queues
Definitions
- the present invention relates generally to data and information processing for communication systems, and more particularly to a method, an apparatus and a system for processing asynchronous messages of a sequence in a distributed and parallel processing environment.
- the transmission of messages may be either synchronous or asynchronous.
- the messages are distributed and multicast with full recipient isolation wherein each multicast message is processed independently from each other.
- Figure 1 shows a synchronous transmission of messages or service calls between two systems, on one side a calling system 110 and on the other side a remote system 120, wherein the calling system 110 controls the order of the message processing.
- the calling system 110 is waiting for the result of remote processing; as a consequence the caller is the master regarding the order in which messages are actually processed on a server system or the remote system.
- a transmission 111 of a first message A from the calling system 110 is processed in the remote system 120 and followed by a message A processed 121 returned to the calling system 110.
- the calling system 110 can start a transmission 113 of a second message B to the remote system 120.
- the second message B is then processed in the remote system 120 and a message B processed 123 is returned to the calling system 110.
- the chronological processing of the synchronous calls or messages between the calling system 110 and the remote system 120 shows that the process 112 of the first message A by the server system or the remote system 120 occurs before the process 114 of the second message B.
- Figure 2 shows an asynchronous transmission of messages or service calls between a calling system 210 and a server system or a remote system 220, wherein the calling system 210 sends a service call or message to the server system or a remote system 220 which will then process the message based on its own scheduling.
- the client system or the calling system 210 loses control of the timing of the message processing.
- a transmission 211 of a first message A from the calling system 210 is processed in the remote system 220.
- the calling system 210 has started a transmission 213 of a second message B to the remote system 220.
- the second message B is then processed in the remote system 220 and it cannot be determined whether a message B processed is returned to the calling system 210 before a message A processed.
- the chronological processing of the asynchronous calls or messages between the calling system 210 and the remote system 220 shows that the process 212 of the first message A by the server system or the remote system 220 occurs more or less at the same time as the process 214 of the second message B. It would also be possible that the second message B is processed before the first message A, which could severely impact the relevancy of the sequence containing messages A and B.
- Figure 3 is an exemplary flow diagram showing a parallel processing of service calls or messages in a distributed system.
- service calls or messages are processed in parallel by instantiation and/or in multithreading.
- Instances 1, 2, 3, ... and n, referred to as 310-1, 310-2, ... and 310-n, of the process system are processing four messages 1, 2, 3 and 4 in the message queue 340 with an inbound sequence.
- message 1 is processed in instance 2 and message 2 is processed in instance 3;
- message 3 is processed in instance 1;
- message 4 is processed in instance 3, as is message 2.
- the sequence refers to the order in which the service calls or messages are to be conveyed and/or processed by the distributed system. This order is generally driven by the business process or an industry standard. By not respecting this order, the outcome results in inadequate processing and in the worst case in irreversible corruption of the stored functional data, also called database corruption.
- Figure 4 is an exemplary flow diagram showing a parallel processing of asynchronous calls or messages in a distributed process system which results in a risk of de-ordering the processing of messages and corrupted data.
- the sequencing is ensured by the emitter system or the calling system which initiates the messages to the remote system one after the other, controlling de facto the sequence flow between correlated messages.
- the present invention aims to mitigate the aforementioned problem and to avoid any irreversible corruption of the stored functional data, or any database corruption.
- the invention provides a computer-implemented method of sequencing distributed asynchronous messages in a distributed and parallel system having a plurality of inbound handlers forming an inbound handlers layer and a plurality of outbound handlers forming an outbound handlers layer, the method comprising the following steps performed with at least one data processor:
- the distributed and parallel system can be seen as a router including: inbound handlers which are arranged in parallel to each other in an inbound handlers layer receiving messages pertaining to many sequences; a storage layer comprising a shared sequence storage, a queue storage and a shared overflow storage and being configured to receive the messages from the inbound handlers and to store them in a memory.
- the sequence storage may comprise the overflow storage.
- the distributed and parallel system further includes outbound handlers which are arranged in parallel to each other in an outbound handlers layer and which are adapted to retrieve the messages from the shared queue storage for processing while the system ensures the correct sequencing of the messages within their respective sequence.
- the outbound handlers are configured to receive messages, to process them and to possibly deliver them to the correct recipient.
- the overflow storage and the sequence storage are shared by the inbound handlers and by the outbound handlers. They are used in common by the parallel inbound handlers and by the parallel outbound handlers.
- the invention therefore provides a solution for maintaining the order of messages pertaining to a same sequence while allowing parallel processing of various sequences in a distributed environment.
- decoupling the inbound handlers from the outbound handlers allows isolating the throughput of the emitters from the throughput of the recipients.
- the number of inbound handlers and the number of outbound handlers are highly and independently scalable.
- the invention avoids creating an affinity between a sequence and an in/outbound handler, allowing thereby any in/outbound handler to handle a message of any sequence.
- the invention offers strong resiliency, since the outage of some inbound or outbound handlers does not affect the processing of the messages.
- the method according to the invention may also comprise any one of the following additional features and steps:
- the step of determining if the incoming message is the next message to be processed for maintaining the order of messages in said sequence comprises:
- the message is determined to be the next message to be processed for maintaining the order of the messages in said sequence,
- a message is provided with a message rank number, which is referred to in the present description as the message rank.
- the message rank defines the order of messages within a sequence.
- a message may be provided with a message rank by the originator of the message or a third party.
- a message rank may also be assigned by the system according to the arrival order of the messages of a sequence.
- a sequence rank in the understanding of the present description defines which message of a given sequence is to be processed next, i.e. which message rank the next message to be processed must have.
- the sequence rank is indicated in the sequence storage.
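The notions above (a per-sequence record holding the sequence rank, a status, and a lock, created on first use) can be sketched as follows. This is a minimal illustration only; the class and function names, and the in-memory dictionary standing in for the shared sequence storage, are assumptions and not from the patent.

```python
from dataclasses import dataclass, field
import threading

@dataclass
class SequenceContext:
    """Hypothetical per-sequence record kept in the shared sequence storage."""
    correlation_value: str          # sequence identifier set by the emitter
    sequence_rank: int = 1          # rank of the next message to be processed
    status: str = "waiting"         # "waiting", "processing" or "pending"
    mutex: threading.Lock = field(default_factory=threading.Lock)

# The context is created dynamically the first time a correlation value is seen;
# sequences never need to be defined in advance.
contexts = {}

def get_or_create_context(correlation_value):
    if correlation_value not in contexts:
        contexts[correlation_value] = SequenceContext(correlation_value)
    return contexts[correlation_value]
```

The dynamic creation step mirrors the description's point that a sequence context is created transparently when it does not yet exist.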
- processing a message at an outbound handler means that the outbound handler sends or delivers the message to a recipient.
- the sequence rank of said given sequence is incremented.
- the method comprises checking if the overflow storage comprises a message with a message rank that is equal to the sequence rank as incremented and subsequently forwarding this message to the queue storage.
- the step of determining a message rank comprises assigning to the incoming message a message rank indicating the rank of the incoming message in its sequence and storing the assigned message rank in the sequence storage.
- the assigned message rank corresponds to the rank of the last message received at any one of the inbound handlers for said sequence plus one increment.
- for the first message of a sequence, the message rank is 1.
- the message rank assigned to the newly incoming message is N+1.
- the incoming message as received in the inbound handler is provided with an index indicating the message rank within the sequence.
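The rank assignment described above (rank of the last message received plus one increment, or an explicit index supplied with the message) can be sketched as below. The function name and the tracking dictionary are illustrative assumptions.

```python
# Tracks, per sequence, the rank of the last message accepted at any inbound
# handler (a stand-in for the shared sequence storage).
last_assigned_rank = {}

def assign_message_rank(correlation_value, explicit_rank=None):
    if explicit_rank is not None:
        # Rank supplied by the originator of the message or a third party.
        last_assigned_rank[correlation_value] = explicit_rank
        return explicit_rank
    # First message of a new sequence gets rank 1; each later one gets N + 1.
    rank = last_assigned_rank.get(correlation_value, 0) + 1
    last_assigned_rank[correlation_value] = rank
    return rank
```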
- the status of the sequence is set to "pending".
- "pending" means that the overflow storage area contains at least a message for the given sequence, but that this or these messages have a message rank that is not equal to the sequence rank.
- the sequence status is set to "waiting" when none of the outbound handlers is currently processing a message for said sequence and when no message for that sequence is in the overflow storage area.
- sequence status is set to "processing" when at least one of the outbound handlers is currently processing a message of said sequence.
- the queue storage does not comprise any message for the sequence of the incoming message and if the message rank of the incoming message is greater than the sequence rank indicated in the sequence storage, then the incoming message is stored in the overflow storage until the sequence rank is incremented and equals the message rank of the incoming message.
- the message was provided with a message rank by the originator of the message or a third party, and if the message rank is greater than the sequence rank then the message is stored in the overflow storage.
- the sequence rank will be incremented until it reaches the message rank of the message previously stored. This message can then be released from the sequence storage or more precisely from the overflow storage and can be sent to the queue storage once the queue storage and the inbound handlers are not storing and processing a message of this sequence.
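The release step described above, where incrementing the sequence rank can free a withheld message from the overflow storage, might look like this. The dictionary-based storage shapes and the function name are assumptions.

```python
# Hedged sketch: when processing of a message completes, the sequence rank is
# incremented; if the overflow now holds the message with exactly that rank,
# it is promoted to the queue storage.
def complete_processing(sequence, overflow, queue):
    sequence["rank"] += 1  # rank of the next expected message
    message = overflow.pop((sequence["id"], sequence["rank"]), None)
    if message is not None:
        queue.append(message)

sequence = {"id": "A", "rank": 1}
overflow = {("A", 2): "message #2"}   # arrived early, withheld out of rank
queue = []
complete_processing(sequence, overflow, queue)
```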
- the outbound handlers operate asynchronously, allowing thereby an outbound handler to send a message and then to be available for another processing upon sending the message and before receiving an acknowledgment of response from a recipient of the message.
- an outbound handler comprises a delivery process that sends messages to recipients and an acknowledgment process that receives acknowledgment of receipt from the recipients.
- the delivery process and the acknowledgment process operate independently, allowing thereby a delivery process to be available immediately upon sending a message.
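The decoupling of the delivery process from the acknowledgment process can be sketched as follows: the delivery side posts and returns immediately, and a separate path later consumes the recipient's acknowledgment. The function names and the in-flight registry are illustrative assumptions.

```python
in_flight = {}   # message id -> sequence correlation value, awaiting ack
delivered = []   # what the delivery process has sent so far

def delivery_process(message_id, sequence_id, send):
    send(message_id)                 # post the message to the recipient...
    in_flight[message_id] = sequence_id
    # ...and quit: no blocking wait for the acknowledgment here.

def acknowledgment_process(message_id, on_sequence_complete):
    sequence_id = in_flight.pop(message_id)
    on_sequence_complete(sequence_id)  # e.g. increment the sequence rank

delivery_process("m1", "A", delivered.append)
delivery_process("m2", "B", delivered.append)  # free to serve another sequence
completed = []
acknowledgment_process("m1", completed.append)
```

Note how "m2" of a second sequence is delivered before the acknowledgment for "m1" arrives, which is exactly the asynchrony the description claims.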
- the method, upon receiving the incoming message and before the checking step, comprises performing an inbound locking step wherein all inbound handlers are prevented from receiving another message of said sequence until the incoming message is stored in the sequence storage or is sent to the queue storage.
- an incoming message can be accepted at an inbound handler while another message for the same sequence is being sent or processed by an outbound handler.
- the only limited cases for which an incoming message needs to wait for release of the locking are:
- when an outbound handler receives a reply, i.e., an acknowledgment, from a recipient, it locks the sequence and the corresponding rank for the time needed to seek the next message to be sent in said sequence, if any, and to increment the rank.
- the inbound locking step comprises locking a mutex dedicated to said sequence, said mutex being stored in the sequence storage.
- upon receiving an incoming message, the inbound handler checks the sequence correlation value of the sequence of said incoming message and reads the mutex parameter for said sequence before accepting the incoming message. The inbound handler accepts the incoming message if the mutex is not locked. If the mutex is locked, the incoming message waits for the release of the mutex.
- the mutex is stored in a sequence register comprised in the sequence storage.
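The per-sequence mutex mechanism described above can be sketched as below; locking is scoped to one sequence, so messages of other sequences are never blocked. The registry layout and function names are assumptions for illustration.

```python
import threading

sequence_registers = {}              # correlation value -> per-sequence mutex
registry_lock = threading.Lock()     # guards creation of new registers

def sequence_mutex(correlation_value):
    with registry_lock:
        return sequence_registers.setdefault(correlation_value, threading.Lock())

def accept_message(correlation_value, message, store):
    mutex = sequence_mutex(correlation_value)
    with mutex:  # blocks only other inbound messages of the SAME sequence
        store(correlation_value, message)

accepted = []
accept_message("SEQ-A", "m1", lambda s, m: accepted.append((s, m)))
```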
- the storage queue in the queue storage ensures that for a given sequence only one message is propagated to an outbound handler until the outbound handler layer has completed the processing of the message for that sequence.
- the outbound locking step comprises locking a mutex dedicated to said sequence, said mutex being stored in the sequence storage.
- when available, an outbound handler checks in the queue storage if a message is available for processing, then retrieves said message and processes it.
- when an outbound handler is available, it checks in the queue storage 850 if there is an available message to process. If there is a message, then this message is automatically the correct message to be processed for said given sequence.
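An outbound handler's pick-up step might be sketched as follows. The key invariant, per the description, is that the queue storage only ever holds the next in-rank message of a sequence, so whatever the handler pops is by construction the correct one. The deque-based queue and function name are assumptions.

```python
import collections

queue_storage = collections.deque()  # stand-in for the shared queue storage

def outbound_handler_step(process):
    """One polling step of an available outbound handler."""
    if not queue_storage:
        return False          # nothing available, handler stays idle
    message = queue_storage.popleft()
    process(message)          # deliver the message to its recipient
    return True

queue_storage.append("message #1 of sequence A")
handled = []
outbound_handler_step(handled.append)
```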
- upon storage of the incoming message in the sequence storage, or more precisely in the overflow storage, the inbound handler sends an acknowledgment message.
- the acknowledgment message is sent to an originator of the message.
- a message having a message rank greater than the sequence rank is stored in the overflow storage, where it remains locked as long as its message rank does not match the sequence rank, i.e., the rank of the next message to be processed.
- a message having a message rank greater than the sequence rank is first stored in the overflow storage and is then discarded from the overflow storage after a time out value assigned to the sequence of the message is reached.
- a message having a message rank greater than the sequence rank is first stored in the overflow storage and is then discarded from the overflow storage after a time out value assigned to the message is reached.
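The timeout-based discard described above can be sketched as follows; expired overflow entries are simply dropped. The `(deadline, message)` storage layout and the function name are assumptions for illustration.

```python
import time

def purge_expired(overflow, now=None):
    """Drop every overflow entry whose timeout deadline has passed."""
    now = time.time() if now is None else now
    expired = [key for key, (deadline, _) in overflow.items() if deadline <= now]
    for key in expired:
        del overflow[key]
    return expired

# Keys are (sequence id, message rank); values are (deadline, message).
overflow = {("A", 3): (100.0, "late message"), ("A", 4): (200.0, "ok")}
dropped = purge_expired(overflow, now=150.0)
```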
- the invention relates to a non-transitory computer-readable medium that contains software program instructions, where execution of the software program instructions by at least one data processor results in performance of operations that comprise execution of the method according to the invention.
- the invention relates to a distributed and parallel processing system for sequencing asynchronous messages comprising:
- each of the plurality of inbound handlers being configured to receive independently a plurality of incoming messages pertaining to various sequences;
- each of the plurality of outbound handlers being configured to process and forward independently the plurality of incoming messages
- a storage layer comprising at least a memory and comprising:
- a queue storage for storing incoming messages ready to be transmitted to the plurality of outbound handlers;
- a sequence storage comprising: a sequence status context (802) for maintaining and updating a status of sequences of the incoming messages; and an overflow storage configured to receive the messages from the inbound handlers and to sequentially forward them to the queue storage,
- the system being also configured to determine if an incoming message is the next message to be processed for maintaining the order of the messages in the sequence of this message and to perform the following steps performed with at least one data processor:
- if the sequence status indicates that none of the outbound handlers is currently processing a message for said sequence and if the incoming message is determined to be the next message to be processed for said sequence, then forwarding the incoming message to the queue storage and subsequently forwarding it to an available outbound handler for processing;
- if the sequence status indicates that at least one of the outbound handlers is currently processing a message of said sequence; or if the queue storage already comprises a message to be processed for said sequence; or if the incoming message is determined not to be the next message to be processed for said sequence, then storing the incoming message in the overflow storage to keep it for further processing.
- the queue storage and the sequence storage of the storage layer are implemented in an in-memory database or in a file-based storage.
- the queue storage and the sequence storage of the storage layer are implemented in a client-server storage database.
- checking the sequence status comprises retrieving the status of a sequence based on the sequence correlation value of said sequence.
- the invention relates to a computer-implemented travel monitoring method for processing asynchronous messages between at least one server application and at least one client application in a parallel environment having a plurality of parallel inbound handlers and a plurality of parallel outbound handlers, the method comprising the following steps performed with at least one data processor:
- if the sequence status indicates that none of the outbound handlers out of the plurality of parallel outbound handlers is currently processing a message for said sequence and if the incoming message is determined to be the next message to be processed for said sequence, then forwarding the incoming message to a queue storage and subsequently forwarding it to an available outbound handler for processing;
- if the sequence status indicates that at least one of the outbound handlers is currently processing a message of said sequence; or if the queue storage already comprises a message to be processed for said sequence; or if the incoming message is determined not to be the next message to be processed for said sequence, then storing the incoming message in an overflow storage to keep it for further processing, wherein the messages comprise data related to passengers and the sequence correlation value contains data related to references of a transportation service.
- the method according to the invention may also comprise any one of the following additional features and steps.
- the messages are forwarded from the outbound handlers to at least one of: a travel reservation and booking system, an inventory system of an airline, an electronic ticket system of an airline, a departure control system of an airport, the operational system of an airport, the operational system of an airline, the operational system of a ground handler.
- the references of a transportation service comprise at least one of the following: a flight number, a date and a class reservation.
- the messages are indicative of any one of: boarding passengers, cancelled passengers, added passengers.
- a sequence time out value is provided for each incoming message in order to remove the incoming message stored in the overflow storage after a sequence time out value is reached, the sequence time out value being triggered by the departure time of a flight or being any one of: an expiration of a flight offer or an expiration of a promotion.
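In the travel use case above, a sequence correlation value could be derived from the references of a transportation service, for example as below. The field names and the string format are purely illustrative assumptions, not from the patent.

```python
def correlation_value(flight_number, date, booking_class):
    """Build a sequence correlation value from transportation-service
    references (illustrative format)."""
    return f"{flight_number}/{date}/{booking_class}"

# All passenger messages for the same flight, date and class would then
# share one sequence and be processed in order.
key = correlation_value("XX123", "2013-08-01", "Y")
```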
- the invention relates to a non-transitory computer-readable medium that contains software program instructions, where execution of the software program instructions by at least one data processor results in performance of operations that comprise execution of the above method according to the invention.
- the invention relates to a computer-implemented method of sequencing distributed asynchronous messages in a distributed and parallel system having a plurality of inbound handlers and a plurality of outbound handlers comprising at least one processor to process the messages, the method comprising the following steps performed with at least one data processor:
- if the sequence status indicates that none of the outbound handlers is currently processing a message for said sequence and if:
- the incoming message as received is not provided with any index indicating the message rank within the sequence and the sequence storage does not already comprise any message to be processed for said sequence, or if
- the incoming message as received is provided with an index indicating the message rank within the sequence, said message rank being equal to a sequence rank indicated in the sequence storage and defining the rank of the next message to be processed for said sequence,
- Figure 1 is an exemplary flow diagram showing a chronological processing of synchronous calls or messages between a calling system and a remote system.
- Figure 2 is an exemplary flow diagram showing a chronological processing of asynchronous calls or messages between a calling system and a remote system.
- Figure 3 is an exemplary flow diagram showing a parallel processing of calls or messages in a distributed process system.
- Figure 4 is an exemplary flow diagram showing a parallel processing of asynchronous calls or messages in a distributed process system which results in a risk of de-ordering the processing of messages and corrupted data.
- Figure 5 shows an exemplary block diagram of a high level sequence management in a centralized and shared sequence context according to the present invention.
- Figure 6 is an exemplary flow diagram of the process for identifying sequences within a transmission and processing channel according to the present invention.
- Figure 7 shows an exemplary asynchronous and distributed processing system according to the present invention.
- Figure 8A is an exemplary step of a sequencing process wherein an inbound handler receives a first message on sequence A according to the present invention.
- Figure 8B is another exemplary step of a sequencing process wherein an inbound handler receives a second message on sequence A according to the present invention.
- Figure 8C is another exemplary step of a sequencing process wherein an outbound handler processes a first message on sequence A according to the present invention.
- Figure 8D is another exemplary step of a sequencing process wherein an outbound handler has processed a first message on sequence A according to the present invention.
- Figure 8E is an exemplary step of a sequencing process wherein a sequence is re-arranged according to the present invention.
- the processing order of a message is defined in an asynchronous and parallel environment by the emitter of the message or the calling system, either explicitly by providing an index indicating the rank of each message within the sequence, or implicitly by delivering messages sequentially and awaiting a transport acknowledgement of a given message before sending the next message in the given sequence.
- the present invention aims to ensure that concurrent and independent processes respect the sequencing order for processing a given set of messages defined as a sequence.
- Each message or service call belonging to a given sequence is tagged, by interface definition, to actually refer to the specific sequence it belongs to.
- the rank of a message or service call within a sequence of messages is either:
- defined explicitly: the message comprises a field including an index that defines the rank of the message within its sequence; or
- defined implicitly: by using the sequential order in which the messages or service calls in the sequence are received over time.
- Figure 5 shows an exemplary block diagram of a high level sequence management in a centralized and shared sequence context.
- the main features of the system and the main steps are shown in detail.
- the asynchronous and distributed processing system comprises an inbound handler 510 receiving incoming messages 501, and an outbound handler 530 configured to process messages and deliver them.
- the system also comprises an overflow storage area 540 that possibly stores the messages received from the inbound handler if the processing of the message 501 must be withheld to maintain the order of the sequence to which the message belongs.
- the system comprises a plurality of parallel inbound handlers 510 and a plurality of parallel outbound handlers 530; figure 5 is simplified to facilitate understanding.
- the inbound handler can also be referred to as an acceptor or an acceptor process.
- the inbound handler layer comprising a plurality of inbound handlers can also be referred to as an acceptor layer.
- the outbound handler can also be referred to as a processor or a delivery process.
- the outbound handler layer comprising a plurality of outbound handlers can also be referred to as a delivery layer.
- Inbound handler 510 is in particular configured to perform any one of: receive messages from emitters such as publishers; validate the integrity of messages; perform the sequence locking and status validation; store the message in one of the two areas (i.e., the queue storage or the overflow area); reply to the emitter.
- the outbound handler 530 is composed of two processes.
- a first process referred to as the delivery process 531 and which is configured to perform any one of: get a message from storage queue; send it to the recipient via the communication channel; quit to be available for other processes.
- a second process referred to as the acknowledgement process 532 and which is configured to: receive from the recipient an acknowledgement of reception; perform the sequence management so as to put the next message of the corresponding sequence, if any, in the storage queue; quit to be available for other processes.
- the delivery layer formed of outbound handlers 530 is asynchronous, which allows complying with high scalability requirements. This way, the system is independent of the latency of the recipient. More precisely, it means that an outbound handler can retrieve and deliver a message of a first sequence and can then retrieve and deliver another message of a second sequence before it receives an acknowledgement for the delivery of the message for the first sequence. Therefore, an outbound handler can asynchronously handle messages from many sequences, increasing thereby the number of messages that the system can route while always maintaining the correct order for each sequence.
- a central and shared sequence context is implemented, wherein a state machine is used for each sequence.
- a state machine is used for each sequence.
- a corresponding sequence context status is checked 520.
- the corresponding sequence context does not exist, it is created dynamically and transparently.
- the invention does not require sequences to be defined in advance in the system, but is fully dynamic in this respect.
- a message rank is assigned to the message according to the arrival order of the message.
- sequence status indicates that the outbound handler layer is waiting for the next message of the sequence, i.e., the sequence status is "Waiting": then the incoming message 501 is processed 522 normally according to the standard behaviour by an outbound handler 530 (the message will be available for the asynchronous processing); or
- sequence status indicates that a message of the sequence is already currently being processed, i.e., the sequence status is "Processing": then the incoming message 501 is stored in a specific sequence overflow storage area 540 in order to be processed later 524.
- the overflow storage area 540 is structured / indexed in such a way that the order of the incoming message is not lost. In this way, the incoming message 501 is kept for further processing and it is not available for immediate processing (as out of sequence).
- the outbound handler layer receives the messages to be processed according to the standard behaviour, wherein the messages are de facto, in the correct sequence rank. Once a message 501 is processed, the outbound handler layer looks for the next message to process in the sequence in the overflow storage area 540. If such a message is found, it is pushed to the outbound handler layer, according to the standard process. If no message is found, then the sequence status is set back to "Waiting".
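The routing decision described in the steps above can be sketched end to end: a message of a "waiting" sequence with the expected rank goes straight to the queue storage; otherwise it is withheld in the overflow area, indexed by rank so the order is not lost; and once processing completes, the handler either promotes the next overflowed message or sets the status back to "waiting". The function names and dictionary-based storages are assumptions for illustration.

```python
def route_incoming(ctx, rank, message, queue, overflow):
    """Inbound decision: queue the in-rank message of an idle sequence,
    otherwise keep it for further processing in the overflow area."""
    if ctx["status"] == "waiting" and rank == ctx["rank"]:
        ctx["status"] = "processing"
        queue.append(message)
    else:
        overflow[(ctx["id"], rank)] = message  # indexed: order is not lost

def on_processed(ctx, queue, overflow):
    """Completion step: seek the next message of the sequence in overflow;
    if none is found, set the sequence status back to 'waiting'."""
    ctx["rank"] += 1
    nxt = overflow.pop((ctx["id"], ctx["rank"]), None)
    if nxt is not None:
        queue.append(nxt)                      # sequence stays "processing"
    else:
        ctx["status"] = "waiting"

ctx = {"id": "A", "rank": 1, "status": "waiting"}
queue, overflow = [], {}
route_incoming(ctx, 2, "B", queue, overflow)   # out of rank: withheld
route_incoming(ctx, 1, "A", queue, overflow)   # in rank: queued
on_processed(ctx, queue, overflow)             # releases "B" from overflow
```

Note that "B" arrived first yet is delivered second, which is the re-ordering guarantee the description claims.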
- the order of each message within the sequence is maintained.
- the sequence storage defines a sequence rank indicating the rank of the next message that must be processed to preserve the order of the message of a sequence.
- the sequence rank is incrementally updated each time the processing of a message is complete. The sequence rank can therefore be seen as a counter.
- the present invention provides a dynamic way to leave an indicator in the context of the sequence, to take action on a sequence when it is considered expired, such as to discard the expired message or expired sequence.
- Some Message Oriented Middleware systems, referred to as MOMs, provide a sequencing feature by avoiding the use of parallelism (i.e., they enforce only one de-queuing consumer). Therefore, they provide a sequence guarantee at the expense of scalability.
- Oracle® Advanced Queuing is a typical example.
- Other MOMs, for instance MQ-Series, do provide a sequencing feature based on a correlator, but they require the sequence messages to be processed as if they were logically grouped together. Besides, the group size has to be limited and the MOM may require additional constraints on the de-queuing process.
- the distributed and parallel processing according to the present invention provides strict sequencing while keeping parallelism and scalability, and without requiring particular constraints in the way messages or services calls are correlated, or processed by the de-queuing process.
- the high scalability and resilience of the method, apparatus and system of the present invention enables: • To implement a fully asynchronized delivery process using the "post and quit" principle, in which the message is posted and the process does not wait for an acknowledgement, another process (the acknowledgment process) being in charge of receiving the acknowledgement, which allows coping with a very high message throughput; and
- the trivial approach to cope with message sequencing may be to revert to a mono process architecture which raises huge and sometimes unacceptable constraints in terms of resilience and scalability.
- the present invention allows the full benefit of distributed and parallel processing at two levels, the inbound handlers level and the outbound handlers level while ensuring sequencing provided the cardinality of the sequences is high. That means that the invention takes full advantage of parallelizing sequence processing only if the system has to cope with a high number of sequences in parallel.
- the queue storage itself can be local to the node whereas the overflow storage area remains global, i.e., shared by all the outbound handlers.
- the overflow storage needs to be shared because any of the nodes can process a given sequence, therefore, they should have access to the unique overflow area to actually queue and de-queue in this storage.
- A queue storage is not necessarily shared by all outbound handlers: it can be dedicated either to a single outbound handler or to a plurality of outbound handlers. In these cases, where the queue storage is not shared by all outbound handlers, each message is received by only one local queue storage. • The storage of rejected messages is easier, as there is no need for an exception queue since the message can sit in the overflow area with an altered status.
- the message sequence is processed in a distributed and parallel mode by performing the identification of the sequence and the identification of the message rank in the sequence.
- The sequence is to be managed and rearranged, including handling of the sequence locks and the time-outs.
- FIG. 6 shows an exemplary flow diagram of the process for identifying sequences within a transmission and processing channel 620 between an emitting system 610 and a process system 630.
- a dedicated parameter is provided to each messaging primitive involved in a given transmission.
- This dedicated parameter is a sequence identifier also referred to as a sequence correlation value. It is typically an alphanumeric value set by the emitting system of the message.
- This parameter is used by each involved component to identify messages belonging to the same sequence. For instance, messages 1 to 4 are parsed in the transmission channel 620 and identified as messages #1, ..., #4. Although these correlated and ordered messages share the same transmission channel 620, they do not consecutively follow each other: they are intertwined in the transmission channel 620 with messages belonging to another sequence.
- the sequence correlation parameter is defined in a way to ensure it is not shared by distinct colliding processes on a given transmission and processing chain. In this context, it is mandatory to have a strict uniqueness. Preferably, this definition of the sequence correlation parameter is the responsibility of the business process using the system.
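As a toy illustration (not taken from the patent text), the role of the sequence correlation value in separating intertwined messages on one transmission channel can be sketched in Python; the field names `seq_id` and `payload` are hypothetical:

```python
from collections import defaultdict

def group_by_sequence(channel):
    """Partition an interleaved message stream into per-sequence lists,
    preserving arrival order within each sequence."""
    sequences = defaultdict(list)
    for msg in channel:
        sequences[msg["seq_id"]].append(msg["payload"])
    return dict(sequences)

# Two sequences intertwined on the same transmission channel, as in Figure 6.
channel = [
    {"seq_id": "A", "payload": "A#1"},
    {"seq_id": "B", "payload": "B#1"},
    {"seq_id": "A", "payload": "A#2"},
    {"seq_id": "B", "payload": "B#2"},
    {"seq_id": "A", "payload": "A#3"},
]
groups = group_by_sequence(channel)
```

Each component only needs to read the correlation value to recover per-sequence ordering, without any coordination with the emitting system.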
- Messages which need to be processed in a specific order can be categorized into two kinds: • The first kind of messages, for which the order or rank within the sequence is known at the time of the message creation, and preferably for which the total number of messages is also known at the time of the message creation; the generating process is capable of assigning to each message a specific message rank number in the transmission primitive. This message rank number is then conveyed and stored by each process as part of an overall chain until the final processing is performed; and
- The second kind of messages, for which the ordering within the sequence determined at the time of generation is generally incremental, meaning that each new message (or event) in the process alters the result of processing the previous message in the sequence.
- the emitting system of a message knows neither the message rank number of a message in a given sequence, nor the total number of messages within a sequence.
- the message rank number of a message in a given sequence is referred to as the message rank.
- An exemplary asynchronous and distributed processing system comprises:
- a plurality of Inbound handlers 710, 720, 740 forming an inbound handlers layer.
- The inbound handlers receive incoming messages 711, 721, 731 and 741, store them in a queue storage 750, and possibly acknowledge good reception of these incoming messages 711, 721, 731 and 741 to the emitting system;
- a plurality of Outbound handlers 760, 770, 790 forming an outbound handlers layer.
- Each outbound handler is configured to retrieve messages from the queue storage 750 and to process them.
- Outbound handlers are also in charge of forwarding the processed messages to applications.
- The inbound handlers can also be referred to as acceptors or acceptor processes.
- the outbound handlers can also be referred to as processors or delivery processes.
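A minimal single-process sketch of the two handler layers of Figure 7 sharing a queue storage might look as follows; the function names and the use of an in-memory queue are illustrative assumptions, not the patent's implementation:

```python
import queue

queue_storage = queue.Queue()   # queue storage (750), shared by all handlers

def inbound_handler(message):
    """Acceptor: store the incoming message, then acknowledge reception."""
    queue_storage.put(message)
    return "ack"

def outbound_handler():
    """Processor / delivery process: retrieve and process the next message."""
    message = queue_storage.get()
    return f"processed:{message}"

acks = [inbound_handler(m) for m in ("m1", "m2")]
results = [outbound_handler(), outbound_handler()]
```

In a real deployment, each layer is a plurality of independent processes; the queue storage decouples the two layers so that acceptance and delivery scale independently.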
- Figure 8A shows an improvement of the embodiment of Figure 7 illustrating exemplary steps of a sequencing process of a first message of sequence A received by an inbound handler from the emitting system.
- An additional component, referred to as sequence storage 800, is implemented as part of the storage layer between a plurality of inbound handlers 810, 820, 840 and a plurality of outbound handlers 860, 870, 890.
- the sequence storage 800 comprises:
- A centralized, or common and shared, sequence Status context 802, also referred to as Status, to maintain a shared sequence status between all processes; a sequence being uniquely identified by its sequence correlation value.
- The Status also makes it possible to determine, for each event on a given sequence, the behaviour to apply:
- the overflow storage is a storage for indexed and ordered sequenced messages that are not ready to be delivered in regards to the current sequence status.
- client-server storage database(s) if the distributed processes run on several nodes wherein the server system is a remote system.
- the maximum consistency between the storage layer and standard message storage can be obtained by implementing both in a common RDBMS engine sharing a unique transaction.
- A queue storage 850 allows the message exchange between the plurality of inbound handlers and the plurality of outbound handlers, and operates independently from the sequence storage 800;
- The overflow storage 806 of the sequence storage 800 ensures the sequencing of the message exchange between the plurality of inbound handlers 810, 820, 840 and the plurality of outbound handlers 860, 870, 890.
- Figure 8A illustrates the sequencing process of an incoming message of the present invention
- The inbound handler 810 locks 812 the mutex 804 of sequence "A", preventing any inbound handler or outbound handler from handling another message with the sequence correlation value "A";
- the inbound handler 810 checks 814 the central status context 802 of sequence "A”: wherein either the sequence does not exist or the sequence is in "Waiting" status;
- the inbound handler 810 sets 814 the status context 802 of sequence "A" to "Processing".
- the invention assigns to the incoming message a message rank that is equal to the rank of the previously received message plus one increment. Since the incoming message is the first one for this sequence, the message rank assigned to the message is set to 1.
- the message rank is preferably stored in the sequence storage 800 and more precisely in the sequence context 802.
- the inbound handler 810 preferably stores 816 the message 851 in the queue storage 850; and
- the inbound handler 810 acknowledges to the message emitter, releases the mutex 804 of sequence "A" and is ready to receive any other incoming message.
- Figure 8B illustrates the next steps of the sequencing process where another incoming message is received at the system:
- A second message 801-2 belonging to the sequence "A" is received by an inbound handler 820; • The inbound handler 820 locks 822 the mutex 804 of sequence "A", preventing any inbound handler 810, 830 or 840, or any outbound handler, from handling another message within sequence "A";
- the inbound handler 820 checks 824 the central status context 802 of sequence "A" where the sequence status is "Processing". Since the message cannot be made available to any outbound handlers, the inbound handler 820 stores 826 the message 807 in the overflow storage 806.
- a message rank corresponding to the message rank of the previously incoming message plus one increment is assigned to the incoming message.
- the message rank assigned to the incoming message is 2.
- the invention increments a sequence rank that defines the rank of the next message to be processed for that sequence.
- the correct message is the one having a message rank corresponding to the sequence rank as defined in the sequence.
- the inbound handler 820 acknowledges to the message emitter, releases the mutex of sequence "A" and is ready to receive any other incoming message.
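The inbound-handler steps of Figures 8A and 8B (lock the per-sequence mutex, consult the shared status context, assign the next message rank, and route the message either to the queue storage or to the overflow storage) can be sketched as follows. This is a single-node approximation with assumed names; in the invention these structures live in a shared storage layer accessible to all handlers:

```python
import threading

class SequenceStorage:
    """Sequence storage (800): per-sequence mutexes, status context,
    next message rank, and overflow area."""
    def __init__(self):
        self.mutexes = {}     # per-sequence mutex (804)
        self.status = {}      # sequence status context (802)
        self.next_rank = {}   # message rank to assign to the next arrival
        self.overflow = {}    # overflow storage (806): {seq: {rank: message}}

    def mutex(self, seq):
        return self.mutexes.setdefault(seq, threading.Lock())

queue_storage = []            # queue storage (850), shared by all handlers

def inbound_accept(store, seq, message):
    """Accept one incoming message, assign its rank, and route it."""
    with store.mutex(seq):                        # lock 812 / 822
        rank = store.next_rank.get(seq, 1)        # first message gets rank 1
        store.next_rank[seq] = rank + 1
        if store.status.get(seq, "Waiting") == "Waiting":
            store.status[seq] = "Processing"      # check/set 814
            queue_storage.append((seq, rank, message))  # store 816
        else:
            # sequence already in "Processing": park the message by rank (826)
            store.overflow.setdefault(seq, {})[rank] = message
    return rank                                   # then acknowledge emitter

store = SequenceStorage()
first_rank = inbound_accept(store, "A", "first")
second_rank = inbound_accept(store, "A", "second")
```

Only the first message reaches the queue storage; the second is parked in the overflow area under its rank, exactly as in Figure 8B.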
- Figure 8C illustrates the next steps of the sequencing process where the message stored in the queue of the queue storage is dequeued to one of the outbound handlers for processing:
- One of the outbound handlers 870 retrieves 871 the message 851 of sequence "A" from the queue storage 850. Thanks to the invention, this message is automatically the next message of the sequence that must be processed. Therefore its rank is the rank of the last message that has been processed plus one increment. In this exemplary embodiment, since the message stored in the queue storage is the first one of the sequence A, then its rank is necessarily "1";
- The outbound handler 870 delivers 873 the message with rank "1" to a relevant recipient, or to another routing means before a further delivery to the recipient. Once the message has been sent at step 873, the outbound handler 870 is available for other processing. It can still operate while it has not yet received the acknowledgment of receipt from the recipient. For instance, the outbound handler 870 can retrieve and send another message that pertains to another sequence, thereby achieving an asynchronous delivery that enhances throughput. It can also receive an acknowledgment of receipt from any emitter and for any message. Therefore, the number of messages and processing operations that the outbound handler can execute is not limited by the response time of the emitter of the message that was sent at step 873.
- the recipient receives the message sent at step 873 from the delivery process 8701 of the outbound handler 870. In response, the recipient sends an acknowledgment message to the system.
- the acknowledgment process 8702 of the same outbound handler 870 or the acknowledgment process 8602 of another outbound handler 860 receives the acknowledgment message. This corresponds to step 874 depicted in figure 8C.
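The "post and quit" behaviour of Figure 8C can be sketched as follows; the function names and in-memory lists are illustrative assumptions. The delivery process sends without blocking, and the acknowledgment is consumed later, possibly by a different outbound handler:

```python
def delivery_process(queue_storage, recipient_inbox):
    """Delivery process (8701): send the message and quit, without
    blocking on the acknowledgment of receipt."""
    seq, rank, message = queue_storage.pop(0)      # retrieve 871
    recipient_inbox.append((seq, rank, message))   # deliver 873, then quit
    return (seq, rank)                             # handler is free again

def acknowledgment_process(ack_inbox):
    """Acknowledgment process (8702 or 8602): consume acknowledgments,
    not necessarily in the handler that performed the delivery."""
    return ack_inbox.pop(0)                        # step 874

queue_storage = [("A", 1, "first")]
recipient_inbox, ack_inbox = [], []
sent = delivery_process(queue_storage, recipient_inbox)
ack_inbox.append(("ack", sent))                    # recipient acknowledges
acked = acknowledgment_process(ack_inbox)
```

Because sending and acknowledging are separate processes, the handler's throughput is decoupled from the recipient's response time.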
- Figure 8D illustrates the next steps of the sequencing process where the next message stored in the overflow storage area 806 is forwarded to the queue storage 850 before being forwarded to one of the outbound handlers for processing.
- the steps depicted on figure 8D are triggered by the reception 875 of the acknowledgment message at an inbound handler of the system.
- The outbound handler 860 checks 862 in the status context 802 the sequence rank to determine the rank of the next message that must be processed, within the sequence of the message being acknowledged. Since the sequence rank is set to "2", the outbound handler 860 retrieves 809 the message 807 of sequence "A" having a message rank "2" from the overflow storage area 806. This message 807 will then be stored in the queue storage 850, and will be made available to all outbound handlers.
- The status context 802 remains "Processing".
- the sequence rank is incremented and is set to "3", thereby indicating that the next message to be processed is the one having a message rank equal to "3";
- the outbound handler 860 exits and is ready to process another message stored in the queue storage 850.
- the invention increments a sequence rank that defines the rank of the next message to be processed for that sequence.
- The sequence rank is checked. Only the message with a message rank equal to the sequence rank is forwarded to the queue storage 850. If there is no message in the overflow area 806 of the sequence storage 800 having a message rank equal to the sequence rank, then the processing of this sequence is withheld until a message with the correct rank is received from an inbound handler.
- the sequence rank operates like a counter that indicates the message that must be processed.
- the sequence rank is stored in the sequence storage 800.
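The acknowledgment step of Figure 8D can be sketched as follows; plain dictionaries stand in for the sequence storage, and the names are assumptions. The parked message whose rank equals the sequence rank is promoted to the queue storage, and the counter advances:

```python
def on_acknowledgment(status, seq_rank, overflow, queue_storage, seq):
    """On acknowledgment of a delivered message, promote the parked message
    whose rank equals the sequence rank, then advance the sequence rank."""
    rank = seq_rank[seq]                 # rank of the next message to process
    parked = overflow.get(seq, {})
    if rank in parked:
        queue_storage.append((seq, rank, parked.pop(rank)))  # retrieve 809
        seq_rank[seq] = rank + 1         # counter now points at rank + 1
        status[seq] = "Processing"
    elif parked:
        status[seq] = "Pending"          # parked messages, but wrong ranks
    else:
        status[seq] = "Waiting"          # overflow is empty for this sequence

# Message of rank 1 was just acknowledged; rank 2 waits in the overflow area.
status = {"A": "Processing"}
seq_rank = {"A": 2}
overflow = {"A": {2: "second"}}
queue_storage = []
on_acknowledgment(status, seq_rank, overflow, queue_storage, "A")
```

The sequence rank thus behaves as the counter described above: it always designates the single message that may be released next.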
- Figure 8E illustrates the steps of the sequencing process of an incoming message 801 with the re-arrangement of the sequence:
- additional steps may occur for the re-arrangement driven by an index order:
- the rank of the message as indicated by the index of the message is compared to the rank of the next message to process in order to maintain the sequence order.
- This rank of the next message to process in order to maintain the sequence order is indicated by the sequence rank that is incrementally updated, preferably in the sequence storage.
- The incoming message 815 is stored in the queue storage 850, to be available to the plurality of outbound handlers.
- The sequence status is set to "Processing".
- the sequence rank to process is incremented and the previously described process applies.
- the message 813 is stored in the overflow storage 806.
- the sequence status 802 is set to "Pending”. The message will de facto not be processed.
- the sequencing will resume when a message with the expected message rank to be processed is received by an inbound handler. In that case the inbound handler will store the corresponding message in the queue storage and set sequence status to "Processing".
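The re-arrangement test of Figure 8E, comparing the incoming message's index rank with the sequence rank, can be sketched as follows (a self-contained approximation with assumed structures; plain dictionaries stand in for the sequence storage):

```python
def accept_indexed(status, seq_rank, overflow, queue_storage, seq, rank, message):
    """Compare the message's own index rank with the sequence rank to decide
    between immediate queuing and parking in the overflow storage."""
    if rank == seq_rank[seq]:
        queue_storage.append((seq, rank, message))  # available to outbound
        seq_rank[seq] = rank + 1
        status[seq] = "Processing"
    else:
        overflow.setdefault(seq, {})[rank] = message  # park, indexed by rank
        status[seq] = "Pending"       # will de facto not be processed yet

status, seq_rank = {"A": "Waiting"}, {"A": 1}
overflow, queue_storage = {}, []
accept_indexed(status, seq_rank, overflow, queue_storage, "A", 3, "late")
accept_indexed(status, seq_rank, overflow, queue_storage, "A", 1, "first")
```

The out-of-order message of rank 3 is parked, while the expected message of rank 1 is released immediately and the sequence rank moves on to 2.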
- When an outbound handler ends up working on a message, it seeks in the overflow storage 806 a message with a message rank matching the sequence rank (the sequence rank indicating the rank of the next message to be processed for maintaining the order of the sequence). If such a message is found, it is stored in the queue storage 850; if not, the sequence status is set either to "Pending" (if messages of this sequence exist in the overflow area, but with a rank not equal to the sequence rank) or to "Waiting" (if no messages for that sequence are in the overflow storage area).
- The messages received for a given sequence are stored in the overflow storage 806 as long as their message ranks do not match the one to be processed. This is a lock situation for the whole sequence as long as the next expected message has not been received by an inbound handler.
- the present invention ensures that this lock situation is limited in time, if ever it is required by the process.
- The process also defines a global sequence time out value, expressed as a duration (in seconds, minutes, days, etc.).
- the sequence context 802 may contain an absolute time value, defined as the sequence timeout.
- a time out sequence collector may be implemented for regularly waking up and scanning the full list of sequence contexts.
- any sequence that has expired with regard to its sequence duration is detected. This process makes use of the sequence time out values to achieve the selection.
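The time out sequence collector described above can be sketched as follows; the field names and the explicit `now` parameter are assumptions for illustration:

```python
import time

def collect_expired(contexts, now=None):
    """Wake-up pass of the collector: scan the full list of sequence
    contexts and return the correlation values of sequences whose
    absolute time-out has passed."""
    now = time.time() if now is None else now
    return [seq for seq, ctx in contexts.items() if ctx["timeout"] <= now]

contexts = {
    "A": {"status": "Pending", "timeout": 100.0},
    "B": {"status": "Processing", "timeout": 500.0},
}
expired = collect_expired(contexts, now=200.0)
```

A periodic scheduler would call such a pass at a fixed interval and apply the configured expiry behaviour to each returned sequence.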
- the method, apparatus and system according to the present invention can:
- a Teletype message is usually referred to as TTY.
- A TTY Type B is an airline-industry standard for exchanging messages via asynchronous channels with a strict order of processing for a given functional context. For instance, a first message contains a list of boarding passengers, a second message contains a list of suppressed passengers, and a third message contains a list of added passengers. These lists of passengers have to follow one another in a strict order.
- OTF high-level framework
- Embodiments of the various techniques described herein may be implemented in digital electronic circuitry, in computer or handheld electronic device hardware, in firmware, in software, or in combinations of them.
- Embodiments may be implemented as a program or software product, i.e., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable storage device or in a propagated signal, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, a tablet or multiple computers.
- a program such as the computer program(s) described above, can be written in any form of programming language, including compiled or interpreted languages, and can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
- a program can be deployed to be executed on one computer or tablet or on multiple computers or tablets at one site or distributed across multiple sites and interconnected by a communication network or a wireless network.
- processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer, tablet or electronic device.
- a processor will receive instructions and data from a read-only memory or a random access memory or both.
- Elements of a computer may include at least one processor for executing instructions and one or more memory devices for storing instructions and data.
- a computer or electronic device also may include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.
- Embodiments may be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation, or any combination of such back-end, middleware, or front-end components.
- Components may be interconnected by any form or medium of digital data communication, e.g., a communication network, a wireless network or a telecommunication network.
- Examples of communication or telecommunication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet or a wireless network such as a Wifi network.
- LAN local area network
- WAN wide area network
- Wifi network wireless network
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer And Data Communications (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Multi Processors (AREA)
- Information Transfer Between Computers (AREA)
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201380036210.XA CN104428754B (zh) | 2012-08-02 | 2013-08-01 | 在分布式并行环境中对异步消息排序的方法、系统和计算机程序产品 |
| KR1020157002898A KR101612682B1 (ko) | 2012-08-02 | 2013-08-01 | 분산 및 병렬 환경에서 비동기 메시지를 시퀀싱하는 방법, 시스템 및 컴퓨터 프로그램 제품 |
| JP2015524671A JP6198825B2 (ja) | 2012-08-02 | 2013-08-01 | 分散並列環境における非同期メッセージのシーケンシングの方法、システム、およびコンピュータプログラム製品 |
| IN10080DEN2014 IN2014DN10080A (en) | 2012-08-02 | 2013-08-01 |
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP12368017.5 | 2012-08-02 | ||
| US13/565,284 US8903767B2 (en) | 2012-08-02 | 2012-08-02 | Method, system and computer program product for sequencing asynchronous messages in a distributed and parallel environment |
| EP12368017.5A EP2693337B1 (en) | 2012-08-02 | 2012-08-02 | Method, system and computer program products for sequencing asynchronous messages in a distributed and parallel environment |
| US13/565,284 | 2012-08-02 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2014019701A1 true WO2014019701A1 (en) | 2014-02-06 |
Family
ID=48953356
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2013/002302 Ceased WO2014019701A1 (en) | 2012-08-02 | 2013-08-01 | Method, system and computer program product for sequencing asynchronous messages in a distributed and parallel environment |
Country Status (5)
| Country | Link |
|---|---|
| JP (1) | JP6198825B2 (ja) |
| KR (1) | KR101612682B1 (ko) |
| CN (1) | CN104428754B (zh) |
| IN (1) | IN2014DN10080A (en) |
| WO (1) | WO2014019701A1 (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115567364A (zh) * | 2022-08-26 | 2023-01-03 | 交控科技股份有限公司 | 基于轨道交通分布式调度系统的告警管理装置 |
Families Citing this family (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107093138B (zh) * | 2017-04-21 | 2019-04-30 | 山东佳联电子商务有限公司 | 基于分布式无阻塞异步消息处理模式的拍卖竞价系统及其运行方法 |
| CN110709820B (zh) * | 2017-06-08 | 2023-08-25 | 艾玛迪斯简易股份公司 | 多标准消息处理 |
| EP3419250B1 (en) * | 2017-06-23 | 2020-03-04 | Vestel Elektronik Sanayi ve Ticaret A.S. | Methods and apparatus for distributing publish-subscribe messages |
| CN110865891B (zh) * | 2019-09-29 | 2024-04-12 | 深圳市华力特电气有限公司 | 一种异步消息编排方法和装置 |
| CN111045839A (zh) * | 2019-12-04 | 2020-04-21 | 中国建设银行股份有限公司 | 分布式环境下基于两阶段事务消息的顺序调用方法及装置 |
| CN111506430B (zh) * | 2020-04-23 | 2024-04-19 | 上海数禾信息科技有限公司 | 多任务下数据处理的方法、装置及电子设备 |
| CN111562888B (zh) * | 2020-05-14 | 2023-06-23 | 上海兆芯集成电路有限公司 | 存储器自更新的调度方法 |
| CN114625546B (zh) * | 2020-12-11 | 2025-06-13 | 银联数据服务有限公司 | 一种数据处理方法及装置 |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5588117A (en) * | 1994-05-23 | 1996-12-24 | Hewlett-Packard Company | Sender-selective send/receive order processing on a per message basis |
| WO2003071435A1 (en) * | 2002-02-15 | 2003-08-28 | Proquent Systems Corporation | Management of message queues |
| EP2254046A1 (en) * | 2009-05-18 | 2010-11-24 | Amadeus S.A.S. | A method and system for managing the order of messages |
| WO2012051366A2 (en) * | 2010-10-15 | 2012-04-19 | Attivio, Inc. | Ordered processing of groups of messages |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH09101901A (ja) * | 1995-10-06 | 1997-04-15 | N T T Data Tsushin Kk | マルチプロセスで動作するパーソナルコンピュータ上で行われるプロセス間のメッセージ通信方式及びメッセージ通信方法 |
| US7698267B2 (en) * | 2004-08-27 | 2010-04-13 | The Regents Of The University Of California | Searching digital information and databases |
| GB0613195D0 (en) * | 2006-07-01 | 2006-08-09 | Ibm | Methods, apparatus and computer programs for managing persistence in a messaging network |
| WO2008105099A1 (ja) * | 2007-02-28 | 2008-09-04 | Fujitsu Limited | アプリケーション連携制御プログラム、アプリケーション連携制御方法およびアプリケーション連携制御装置 |
| US8392925B2 (en) * | 2009-03-26 | 2013-03-05 | Apple Inc. | Synchronization mechanisms based on counters |
2013
- 2013-08-01 JP JP2015524671A patent/JP6198825B2/ja active Active
- 2013-08-01 CN CN201380036210.XA patent/CN104428754B/zh active Active
- 2013-08-01 KR KR1020157002898A patent/KR101612682B1/ko active Active
- 2013-08-01 IN IN10080DEN2014 patent/IN2014DN10080A/en unknown
- 2013-08-01 WO PCT/EP2013/002302 patent/WO2014019701A1/en not_active Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| CN104428754A (zh) | 2015-03-18 |
| JP2015527658A (ja) | 2015-09-17 |
| IN2014DN10080A (en) | 2015-08-21 |
| JP6198825B2 (ja) | 2017-09-20 |
| KR20150037980A (ko) | 2015-04-08 |
| CN104428754B (zh) | 2018-04-06 |
| KR101612682B1 (ko) | 2016-04-14 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US8903767B2 (en) | Method, system and computer program product for sequencing asynchronous messages in a distributed and parallel environment | |
| WO2014019701A1 (en) | Method, system and computer program product for sequencing asynchronous messages in a distributed and parallel environment | |
| CN102414663B (zh) | 用于管理报文排序的方法和系统 | |
| US8903925B2 (en) | Scheduled messages in a scalable messaging system | |
| US8578218B2 (en) | Method and system for implementing a scalable, high-performance, fault-tolerant locking mechanism in a multi-process environment | |
| JP5026506B2 (ja) | ポリシーベースのメッセージ集約フレームワーク | |
| EP2693337B1 (en) | Method, system and computer program products for sequencing asynchronous messages in a distributed and parallel environment | |
| US9448861B2 (en) | Concurrent processing of multiple received messages while releasing such messages in an original message order with abort policy roll back | |
| US8661083B2 (en) | Method and system for implementing sequence start and increment values for a resequencer | |
| TWI345176B (en) | Method for scheduling event transactions | |
| CN116382943A (zh) | 顺序消息处理方法、总线系统、计算机设备及存储介质 | |
| US9124448B2 (en) | Method and system for implementing a best efforts resequencer | |
| US8661454B2 (en) | System and method for receiving and transmitting event information | |
| CN111782373B (zh) | 作业调度方法及装置 | |
| EP3513292B1 (en) | Multi-standard message processing | |
| CN107949856B (zh) | 电子邮件停放区 | |
| US20100254388A1 (en) | Method and system for applying expressions on message payloads for a resequencer | |
| Bernstein et al. | Queued transaction processing | |
| CN114625494A (zh) | 任务处理方法、装置、电子设备及计算机可读存储介质 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13747794 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2015524671 Country of ref document: JP Kind code of ref document: A |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| ENP | Entry into the national phase |
Ref document number: 20157002898 Country of ref document: KR Kind code of ref document: A |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 13747794 Country of ref document: EP Kind code of ref document: A1 |