CN104428754B - Method, system and computer program product for sequencing asynchronous messages in a distributed and parallel environment - Google Patents

Method, system and computer program product for sequencing asynchronous messages in a distributed and parallel environment

Info

Publication number
CN104428754B
CN104428754B (granted publication of application CN201380036210.XA)
Authority
CN
China
Prior art keywords
message
sequence
handler
rank
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201380036210.XA
Other languages
Chinese (zh)
Other versions
CN104428754A (en)
Inventor
N·科雷森斯基
C·塞维来科
D·斯培兹阿
P·多尔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Amadeus S.A.S.
Original Assignee
Amadeus S.A.S.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 13/565,284 (US8903767B2)
Priority claimed from EP 12368017.5 (EP2693337B1)
Application filed by Amadeus S.A.S.
Publication of CN104428754A
Application granted
Publication of CN104428754B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/546 Message passing systems or structures, e.g. queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer And Data Communications (AREA)
  • Multi Processors (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention provides a system and a computer-implemented method for sequencing distributed asynchronous messages in a distributed and parallel system having multiple inbound handlers forming an inbound handler layer and multiple outbound handlers forming an outbound handler layer. The method comprises the following steps, carried out with at least one data processor: at any one of the multiple inbound handlers, receiving an input message, the input message having a sequence correlator that identifies the sequence to which the input message belongs; checking the state of the sequence in a shared sequence memory; determining whether the input message is the next message to be processed in order to maintain the order of the messages of the sequence; if the sequence state indicates that no outbound handler of the outbound handler layer is currently processing a message of the sequence, and if the input message is determined to be the next message of the sequence to be processed, then forwarding the input message to a shared queue memory, from which an available outbound handler of the outbound handler layer retrieves it for processing; if the sequence state indicates that at least one outbound handler of the outbound handler layer is currently processing a message of the sequence, or if the shared queue memory already contains the next message of the sequence to be processed, or if the input message is determined not to be the next message of the sequence to be processed, then storing the input message in a shared overflow memory for further processing.

Description

Method, system and computer program product for sequencing asynchronous messages in a distributed and parallel environment
Technical field
The present invention relates to communication systems and to data and information processing, and more particularly to methods, apparatus and systems for processing sequenced asynchronous messages in a distributed and parallel processing environment.
Background art
In service calls or event handling using a distributed software architecture, messages are transmitted either synchronously or asynchronously. Messages are distributed and multicast with the recipients fully decoupled, each multicast message being processed independently of the others.
Fig. 1 shows two systems, a calling system 110 on one side and a remote system 120 on the other, and the synchronous transmission of messages or service calls between them, in which the calling system 110 controls the order in which the messages are processed. In this case, the calling system 110 waits for the result of the remote processing; the caller thus controls the order in which the messages are actually processed on the server or remote system.
A first message A is transmitted 111 from the calling system 110 and processed in the remote system 120, and the processed message A is returned 121 to the calling system 110. Having received the processed message A, the calling system 110 can start the transmission 113 of a second message B to the remote system 120. The second message B is then processed in the remote system 120, and the processed message B 123 is returned to the calling system 110.
In this exemplary flow chart of synchronous calls, or sequential message processing, between the calling system 110 and the remote system 120, the processing 112 of the first message A by the server or remote system 120 takes place before the processing 114 of the second message B.
Fig. 2 shows the asynchronous transmission of messages or service calls between a calling system 210 and a server or remote system 220, in which the calling system 210 sends a service call or message to the server or remote system 220, which then processes the message according to its own scheduling. The client or calling system 210 loses control over the timing of the message processing.
A first message A transmitted 211 from the calling system 210 is processed in the remote system 220. Meanwhile, the calling system 210 has already started the transmission 213 of a second message B to the remote system 220. The second message B is then processed in the remote system 220, and the processed message B may be returned to the calling system 210 without it being possible to determine whether message A has already been processed.
In this exemplary flow chart of asynchronous calls or message processing between the calling system 210 and the remote system 220, the processing 212 of the first message A by the server or remote system 220 takes place more or less simultaneously with the processing 214 of the second message B. It is even possible for the second message B to be processed before the first message A, which can severely affect an association, i.e. a sequence, comprising the messages A and B.
Fig. 3 is an exemplary flow chart showing the parallel processing of service calls or messages in a distributed system. In a distributed system, service calls or messages are processed in parallel, using instances and/or multithreading, in order to comply with performance and scalability requirements. In Fig. 3, the instances 1, 2, 3 … n of a processing system, referenced 310-1, 310-2 … 310-n, process the messages 1, 2, 3 and 4 of a message queue 340 according to the inbound sequence.
Parallel processing does not guarantee the order in which successive service calls or messages are processed and completed. However, service call or message processing sometimes requires the enforcement of a strict ordering between correlated events or messages.
In the example shown in Fig. 3, message 1 is processed in instance 2, message 2 is processed in instance 3, message 3 is processed in instance 1, and message 4, like message 2, is processed in instance 3. Because the processing of the messages is not coordinated, message 2 is processed first, followed by message 1, then message 4 and finally message 3. This is an uncoordinated, out-of-order processing of the transactions.
In Fig. 3, a sequence refers to the order in which the distributed system transmits and/or processes the service calls or messages. This order is typically driven by a business process or an industry standard. If the order is not respected, the result can be an incomplete processing and, in the worst case, an irreversible corruption of the persisted functional data, also referred to as database corruption.
Fig. 4 is an exemplary flow chart showing the parallel processing of asynchronous calls or messages in a distributed processing system, where the parallel processing creates the risk of out-of-order message processing and corrupted data.
In a synchronous environment, the ordering is guaranteed by the sender or calling system, which initiates the messages towards the remote system one after the other and thereby effectively controls the flow of the sequence between the correlated messages.
This ordering becomes impossible when the sender or calling system has to deal with asynchronous distributed processing, because the end of the processing of a message in the remote system 420 cannot be determined. Fig. 4 illustrates this risk: the transmission 411 of a first message A from the calling system 410 to the remote system 420 is followed by the transmission 413 of a second message B. The processing 414 of the second message B starts before the processing 412 of the first message A. The message processing is thus reversed, leading to an incomplete processing and, in the worst case, to an irreversible corruption of the persisted functional data, in other words a database corruption.
The present invention therefore aims at mitigating the above problems and at avoiding any irreversible corruption of the persisted functional data, in other words any database corruption.
Summary of the invention
In one embodiment, the invention provides a computer-implemented method for sequencing distributed asynchronous messages in a distributed and parallel system having multiple inbound handlers forming an inbound handler layer and multiple outbound handlers forming an outbound handler layer, the method comprising the following steps carried out with at least one data processor:
at any one of the multiple inbound handlers, receiving an input message, the input message having a sequence correlator that identifies the sequence to which the input message belongs,
checking the state of the sequence in a shared sequence memory;
determining whether the input message is the next message to be processed in order to maintain the order of the messages of the sequence;
if the sequence state indicates that no outbound handler of the outbound handler layer is currently processing a message of the sequence, and if the input message is determined to be the next message of the sequence to be processed, forwarding the input message to a shared queue memory, from which an available outbound handler retrieves it for processing;
if the sequence state indicates that at least one outbound handler of the outbound handler layer is currently processing a message of the sequence, or if the shared queue memory already contains the next message of the sequence to be processed, or if the input message is determined not to be the next message of the sequence to be processed, storing the input message in a shared overflow memory for further processing.
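As a non-authoritative illustration of the branching above, the inbound routing can be sketched in a few lines of Python. The names `SequenceState`, `dispatch_inbound` and the dict-based memories are assumptions made for the sketch, not the patent's actual implementation:

```python
from enum import Enum

class SequenceState(Enum):
    WAITING = "waiting"        # no message of the sequence is being processed
    PROCESSING = "processing"  # an outbound handler is working on the sequence
    PENDING = "pending"        # out-of-order messages are parked in overflow

def dispatch_inbound(msg, sequence_memory, queue_memory, overflow_memory):
    """Route one input message according to the three cases of the method."""
    seq = sequence_memory.setdefault(
        msg["correlator"], {"state": SequenceState.WAITING, "sequence_rank": 1})
    is_next = msg["rank"] == seq["sequence_rank"]
    queue_busy = any(m["correlator"] == msg["correlator"] for m in queue_memory)
    if seq["state"] == SequenceState.WAITING and is_next and not queue_busy:
        queue_memory.append(msg)   # an available outbound handler will retrieve it
        seq["state"] = SequenceState.PROCESSING
        return "queued"
    if not is_next:
        seq["state"] = SequenceState.PENDING
    overflow_memory.setdefault(msg["correlator"], []).append(msg)
    return "overflow"
```

Under this sketch, a message is queued only when its sequence is idle and it carries the expected rank; everything else waits in the overflow memory, matching the branch structure of the claim.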
The distributed and parallel system can thus be seen as a router comprising: inbound handlers arranged in parallel in an inbound handler layer, which receive messages related to many sequences; and a storage layer comprising the shared sequence memory, the queue memory and the shared overflow memory, configured to receive the messages from the inbound handlers and to store them in memory. The sequence memory may include the overflow memory. The distributed and parallel system also comprises outbound handlers, arranged in parallel in an outbound handler layer, which retrieve the messages from the shared queue memory for processing once the system has guaranteed that each message is correctly ordered within its own sequence. The outbound handlers are configured to receive the messages, process them and possibly deliver them to the correct recipients.
The overflow memory and the sequence memory are shared by the inbound handlers and the outbound handlers, i.e. by the parallel inbound handlers and the parallel outbound handlers.
The invention thus provides a solution that maintains the order of the messages related to the same sequence while allowing the parallel processing of the individual sequences in a distributed environment. In addition, decoupling the inbound handlers from the outbound handlers makes it possible to isolate the throughput of the senders from the throughput of the recipients. Moreover, the number of inbound handlers and the number of outbound handlers are highly independently scalable. Furthermore, the invention avoids creating any affinity between a sequence and an inbound/outbound handler, so that any inbound/outbound handler can process a message of any sequence. The invention therefore offers strong resilience, since the shutdown of some inbound or outbound handlers does not affect the processing of the messages.
The method according to the invention may also include any one of the following additional features and steps:
In one embodiment, the step of determining whether the input message is the next message to be processed in order to maintain the order of the messages of the sequence comprises:
- determining a message rank indicating the rank of the input message within the sequence,
- comparing the message rank with a sequence rank that defines the rank of the next message of the sequence to be processed,
- if the message rank is equal to the sequence rank, determining that the message is the next message to be processed in order to maintain the order of the messages of the sequence,
- if the message rank is greater than the sequence rank, determining that the message is not the next message to be processed in order to maintain the order of the messages of the sequence.
A message that belongs to a sequence and needs to be processed in a particular order can carry an ordinal number, referred to in this description as the message rank. The message rank defines the order of the messages within the sequence. The message rank may be provided by the originator of the message or by a third party, or it may be assigned by the system according to the arrival order of the messages of the sequence.
In this description, the sequence rank defines which message of a given sequence is to be processed next, i.e. which message rank the next message to be processed must have.
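The comparison of the message rank with the sequence rank described above can be stated directly. This is a sketch only; the function name, and the choice of raising an error for a message rank lower than the sequence rank (a case the text does not cover), are assumptions:

```python
def is_next_to_process(message_rank, sequence_rank):
    """True if the message carries exactly the rank the sequence expects next."""
    if message_rank == sequence_rank:
        return True      # the next message to be processed
    if message_rank > sequence_rank:
        return False     # arrived early: park it in the overflow memory
    raise ValueError("rank already processed or duplicated")
```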
Preferably, the sequence rank is stored in the sequence memory.
Typically, the processing of a message by an outbound handler means that the outbound handler sends or delivers the message to a recipient.
Advantageously, when an outbound handler completes the processing of a message of a given sequence, the sequence rank of that given sequence is incremented.
Preferably, when the sequence rank of a sequence is incremented, the method comprises checking whether the overflow memory contains a message whose message rank is equal to the incremented sequence rank, and if so forwarding that message to the queue memory.
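The two paragraphs above describe the completion path: when an outbound handler finishes a message, the sequence rank is incremented and a matching overflow message, if any, is promoted to the queue memory. A minimal sketch under assumed names (`on_outbound_complete` and the dict-based memories are not from the patent):

```python
def on_outbound_complete(correlator, sequence_memory, overflow_memory, queue_memory):
    """Increment the sequence rank, then release a now-eligible overflow message."""
    seq = sequence_memory[correlator]
    seq["sequence_rank"] += 1                    # rank of the next expected message
    for msg in overflow_memory.get(correlator, []):
        if msg["rank"] == seq["sequence_rank"]:  # message waiting for its turn
            overflow_memory[correlator].remove(msg)
            queue_memory.append(msg)
            return msg
    return None
```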
According to an advantageous embodiment, if the received input message does not carry any indication of a message rank within the sequence, the step of determining the message rank comprises assigning to the input message a message rank indicating its rank within its sequence, and storing the assigned message rank in the sequence memory.
Preferably, the assigned message rank corresponds to the rank of the last message of that sequence received by any inbound handler, incremented by one. Thus, if the input message is the first message of the sequence, its message rank is 1. If the message rank of the last message received by the inbound handlers is N, the message rank assigned to the new input message is N+1.
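The N+1 assignment rule above amounts to a per-sequence counter; a sketch (the function name and the `last_rank` field are assumed names, not the patent's):

```python
def assign_message_rank(sequence_memory, correlator):
    """First message of a sequence gets rank 1, then N + 1 for each new arrival."""
    seq = sequence_memory.setdefault(correlator, {"last_rank": 0})
    seq["last_rank"] += 1
    return seq["last_rank"]
```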
In another advantageous embodiment, the input message received by the inbound handler carries an indication of its message rank within the sequence.
Preferably, if the message rank is greater than the sequence rank, the state of the sequence is set to "pending". "Pending" thus means that the overflow memory contains at least one message of the given sequence, but that this message or these messages have a message rank that is not equal to the sequence rank.
Typically, when no outbound handler is currently processing a message of the sequence, and when no message of the sequence is stored in the overflow memory, the sequence state is set to "waiting". Typically, when at least one outbound handler is currently processing a message of the sequence, the sequence state is set to "processing".
Advantageously, if the queue memory does not contain any message of the sequence of the input message, and if the message rank of the input message is greater than the sequence rank indicated in the sequence memory, the input message is stored in the overflow memory until the sequence rank has been incremented up to the message rank of the input message.
Thus, if the message rank of a message is provided by the originator of the message or by a third party, and if that message rank is greater than the sequence rank, the message is stored in the overflow memory. As other messages with lower message ranks are processed, the sequence rank is incremented until it reaches the message rank of the previously stored message. As soon as no message of the sequence is held in the queue memory or still being processed, the stored message can be released from the sequence memory, or more precisely from the overflow memory, and forwarded to the queue memory.
The same applies to messages that do not carry a message rank but are assigned one by the system according to their arrival order. Advantageously, when a message has been successfully processed by an outbound handler, the message is removed from the queue memory.
Advantageously, the outbound handlers work asynchronously, allowing an outbound handler that has sent a message to become available for another processing while the message is being transmitted and before the acknowledgement from the recipient of the message has been received.
According to an advantageous embodiment, an outbound handler comprises a delivery process that sends the messages to the recipients, and an acknowledgement process that receives the acknowledgements of reception from the recipients. The delivery process and the acknowledgement process work independently, so that the delivery process is available again immediately after sending a message.
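The decoupling of the delivery process from the acknowledgement process described above can be sketched with two queues: the delivery process sends and is immediately free again, while acknowledgements are consumed elsewhere. The names and the fire-and-forget `transport` callable are assumptions for the sketch:

```python
import queue

to_deliver = queue.Queue()   # messages handed to the delivery process
acks = queue.Queue()         # acknowledgements consumed by a separate process

def delivery_process(transport):
    """Send messages without waiting for acknowledgements; None stops the loop."""
    while True:
        msg = to_deliver.get()
        if msg is None:
            break
        transport(msg)       # fire and forget: the ack arrives on the acks queue
```

In a real deployment the delivery loop and the acknowledgement loop would run as independent threads or processes, so that a slow recipient only delays the acknowledgement side, never the delivery side.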
Advantageously, upon reception of an input message and before the checking step, the method comprises an inbound locking step in which all inbound handlers are prevented from accepting another message of the same sequence until the input message has been stored in the sequence memory or sent to the queue memory.
Advantageously, an inbound handler can accept an input message while an outbound handler is sending or processing another message of the same sequence. An input message has to wait for the release of a lock only in the following limited cases:
- another input message is being stored in the storage layer or being received by an inbound handler,
- an outbound handler is receiving and processing a recipient's response to a message of the sequence. When an outbound handler receives the reply from the recipient, i.e. the acknowledgement, it locks the sequence and the corresponding rank, looks for the next message of the sequence to be sent (if any), and increments the rank.
Preferably, the inbound locking step comprises locking a mutex dedicated to the sequence, the mutex being kept in the sequence memory.
Preferably, upon reception of an input message, the inbound handler checks the sequence correlator identifying the sequence of the input message and, before accepting the input message, reads the mutex parameter of that sequence. If the mutex is not locked, the inbound handler accepts the input message. If the mutex is locked, the input message waits for the release of the mutex.
More precisely, the mutex is stored in a sequence register contained in the sequence memory.
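The per-sequence mutex kept in the sequence register can be sketched with a standard lock keyed by the sequence correlator; here `threading.Lock` stands in for the mutex, and the registry class and function names are assumptions:

```python
import threading
from collections import defaultdict

class SequenceRegistry:
    """One mutex per sequence, shared by inbound and outbound handlers."""
    def __init__(self):
        self._guard = threading.Lock()             # protects the registry itself
        self._mutexes = defaultdict(threading.Lock)

    def mutex_for(self, correlator):
        with self._guard:
            return self._mutexes[correlator]

registry = SequenceRegistry()

def accept_input(msg, store):
    """Inbound side: hold the sequence mutex only while storing the message."""
    with registry.mutex_for(msg["correlator"]):
        store.append(msg)
```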
Advantageously, there is only one mutex per sequence for both the inbound handlers and the outbound handlers. The storage queue in the queue memory guarantees that, for a given sequence, only one message is propagated to the outbound handlers until the outbound handler layer has completed the processing of that message of the sequence.
Preferably, an outbound locking step comprises locking the mutex dedicated to the sequence, the mutex being stored in the sequence memory.
Advantageously, when an outbound handler becomes available, it checks whether a message is available for processing in the queue memory, then retrieves and processes that message.
Preferably, when an outbound handler becomes available, it checks the messages available for processing in the queue memory 850. If there is such a message, it is automatically the correct message of the given sequence to be processed.
In one embodiment, when the input message has been stored in the sequence memory, or more precisely in the overflow memory, the inbound handler sends an acknowledgement message.
Typically, the acknowledgement message is sent to the originator of the message.
Advantageously, a message whose message rank is greater than the sequence rank is stored in the overflow memory, so that the message remains locked in the overflow memory as long as its message rank does not match the sequence rank, i.e. the rank of the next message to be processed.
Preferably, a message whose message rank is greater than the sequence rank is first stored in the overflow memory and is then discarded from the overflow memory once a timeout value assigned to the sequence of the message is reached. Alternatively or additionally, a message whose message rank is greater than the sequence rank is first stored in the overflow memory and is then discarded from the overflow memory once a timeout value assigned to the message itself is reached.
In another embodiment, the invention relates to a non-transitory computer-readable medium containing software program instructions, wherein the execution of the software program instructions by at least one data processor causes the various operations of the method according to the invention to be carried out.
In another embodiment, the invention relates to a distributed and parallel processing system for sequencing asynchronous messages, the system comprising:
- multiple inbound handlers comprising at least one data processor, each of the multiple inbound handlers being configured to independently receive multiple input messages related to respective sequences;
- multiple outbound handlers comprising at least one data processor, each of the multiple outbound handlers being configured to independently process and forward the multiple input messages; and
- a storage layer comprising at least one memory, the storage layer comprising:
a queue memory for holding the input messages to be sent to the multiple outbound handlers;
a sequence memory comprising: a sequence state context (802) for maintaining and updating the state of the sequences of the input messages; and an overflow memory configured to receive messages from the inbound handlers and to forward them in order to the queue memory;
the system being further configured to determine whether an input message is the next message to be processed in order to maintain the order of the messages of its sequence, and to carry out the following steps with the at least one data processor:
if the sequence state indicates that no outbound handler is currently processing a message of the sequence, and if the input message is determined to be the next message of the sequence to be processed, forwarding the input message to the queue memory, from which an available outbound handler retrieves it for processing;
if the sequence state indicates that at least one outbound handler is currently processing a message of the sequence, or if the queue memory already contains the next message of the sequence to be processed, or if the input message is determined not to be the next message of the sequence to be processed, storing the input message in the overflow memory for further processing.
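The storage layer enumerated above can be pictured as three cooperating stores. A minimal in-memory sketch under assumed names; a real deployment would back these stores with one of the repositories discussed in the embodiments:

```python
from dataclasses import dataclass, field

@dataclass
class SequenceContext:
    """Per-sequence state context (cf. reference 802)."""
    state: str = "waiting"
    sequence_rank: int = 1

@dataclass
class StorageLayer:
    queue_memory: list = field(default_factory=list)     # ready for outbound handlers
    sequence_memory: dict = field(default_factory=dict)  # correlator -> SequenceContext
    overflow_memory: dict = field(default_factory=dict)  # correlator -> parked messages

    def context(self, correlator):
        return self.sequence_memory.setdefault(correlator, SequenceContext())
```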
According to optional embodiments, the queue memory and the sequence memory of the storage layer are implemented in an in-memory database or in a file-based repository. Alternatively, the queue memory and the sequence memory of the storage layer are implemented in a client-server database.
Preferably, checking the sequence state comprises retrieving the state of the sequence according to the sequence correlator of the sequence.
In another embodiment, the invention relates to a computer-implemented travel monitoring method for processing asynchronous messages between at least one server application and at least one client application in a parallel environment having multiple parallel inbound handlers and multiple parallel outbound handlers, the method comprising the following steps carried out with at least one data processor:
- at any one of the multiple parallel inbound handlers, receiving an input message, the input message having a sequence correlator that identifies the sequence to which the input message belongs,
- checking the state of the sequence in a sequence memory;
- determining whether the input message is the next message to be processed in order to maintain the order of the messages of the sequence;
- if the sequence state indicates that none of the multiple parallel outbound handlers is currently processing a message of the sequence, and if the input message is determined to be the next message of the sequence to be processed, forwarding the input message to a queue memory and from there to an available outbound handler for processing;
- if the sequence state indicates that at least one outbound handler is currently processing a message of the sequence, or if the queue memory already contains the next message of the sequence to be processed, or if the input message is determined not to be the next message of the sequence to be processed, storing the input message in an overflow memory for further processing,
wherein the messages contain data related to passengers, and the sequence correlator contains data related to a reference of a transportation service.
The method according to the invention may also include any one of the following additional features and steps.
Once processed, a message may be forwarded from the outbound handler to at least one of the following: a travel and seat reservation system, an airline inventory system, an airline electronic ticketing system, an airport departure control system, an airport operating system, an airline operating system, an operating system of a ground handler.
In one embodiment, the reference of the transportation service comprises at least one of the following: a flight number, a date and a booking class.
In one embodiment, a message represents any one of the following: a passenger boarding, a passenger whose flight has been cancelled, a passenger being added.
In one embodiment, a sequence timeout value is provided for each input message, so that an input message stored in the overflow memory is removed once the sequence timeout value is reached, the sequence timeout value being triggered by the departure time of a flight, or by any one of the following: the expiry date of a flight offer, or the expiry date of a promotion.
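The timeout rule above (removing overflow messages once the flight has departed or the offer or promotion has expired) can be sketched as a periodic purge; the `timeout` field and the epoch-seconds convention are assumptions for the sketch:

```python
def purge_overflow(overflow_memory, now):
    """Discard parked messages whose sequence timeout (e.g. the flight's
    departure time, expressed in epoch seconds) has passed."""
    dropped = []
    for correlator, parked in overflow_memory.items():
        dropped += [m for m in parked if m["timeout"] <= now]
        overflow_memory[correlator] = [m for m in parked if m["timeout"] > now]
    return dropped
```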
In another embodiment, the invention relates to a non-transitory computer-readable medium containing software program instructions, wherein the execution of the software program instructions by at least one data processor causes the various operations of the above method according to the invention to be carried out.
In another embodiment, the invention relates to a computer-implemented method for sequencing distributed asynchronous messages in a distributed and parallel system having multiple inbound handlers, and multiple outbound handlers comprising at least one processor for processing the messages, the method comprising the following steps carried out with at least one data processor:
- receiving an input message at an inbound handler, the input message having a sequence correlator that identifies the sequence to which the input message belongs, and determining a message rank indicating the rank of the input message within the sequence;
- checking the state of the sequence in a sequence memory;
- if the sequence state indicates that no outbound handler is currently processing a message of the sequence, and if:
the received input message does not carry any indication of a message rank within the sequence, and the sequence memory does not yet contain any message of the sequence to be processed, or if
the received input message carries an indication of its message rank within the sequence, and that message rank is equal to the sequence rank, indicated in the sequence memory, that defines the rank of the next message of the sequence to be processed,
then forwarding the input message to a queue memory and from there to an available outbound handler for processing;
- if the sequence state indicates that at least one outbound handler is currently processing a message of the sequence, or if the queue memory already contains the next message of the sequence to be processed, or if the received input message carries an indication of its message rank within the sequence and that message rank is greater than the sequence rank, indicated in the sequence memory, that defines the rank of the next message of the sequence to be processed, then storing the input message in an overflow memory for further processing.
Brief description of the drawings
When read in conjunction with the accompanying drawings, the above and other aspects of embodiments of the invention will become apparent from the following detailed description.
Figure 1A is an illustrative flow chart representing the sequential processing of synchronous calls or messages between a calling system and a remote system.
Fig. 2 is an illustrative flow chart representing the sequential processing of asynchronous calls or messages between a calling system and a remote system.
Fig. 3 is an illustrative flow chart representing the parallel processing of calls or messages in a distributed processing system.
Fig. 4 is an illustrative flow chart representing the parallel processing of asynchronous calls or messages in a distributed processing system, the parallel processing creating a risk of out-of-sequence message processing and corrupted data.
Fig. 5 is an illustrative block diagram of high-level sequence management with a centralized shared sequence context according to the invention.
Fig. 6 is an illustrative flow chart of the identification and processing of sequences within a transmission and processing channel according to the invention.
Fig. 7 represents an illustrative asynchronous distributed processing system according to the invention.
Fig. 8A is an illustrative step of sequence processing according to the invention, in which an inbound processor receives the first message of a sequence A.
Fig. 8B is another illustrative step of sequence processing according to the invention, in which an inbound processor receives the second message of sequence A.
Fig. 8C is another illustrative step of sequence processing according to the invention, in which an outbound processor processes the first message of sequence A.
Fig. 8D is another illustrative step of sequence processing according to the invention, in which an outbound processor has finished processing the first message of sequence A.
Fig. 8E is an illustrative step of sequence processing according to the invention, in which the messages of a sequence are reordered.
Detailed description
Although the following description is given in the context of an application to the travel industry, it is not a limiting example, since the invention applies to all kinds of data processing and to travel products such as accommodation, car rental and train tickets.
According to the invention, in an asynchronous parallel environment, the sender of a message or calling system defines the processing order of messages either explicitly, by providing an index indicating the order of each message within the sequence, or implicitly, by delivering the messages in order and waiting for the acknowledgement of the transmission of a given message before sending the next message of the given sequence.
The invention aims at ensuring that concurrent and independent processes observe the ordering required to process a given group of messages defined as a sequence.
In this respect, the method, apparatus and system according to the invention for ordering distributed asynchronous messages are briefly introduced below, and the various operations are then described in further detail with reference to the accompanying drawings.
Through an interface definition, each message or service call belonging to a given sequence is identified as actually belonging to that particular sequence.
The order of the messages or service calls within a message sequence is:
- either provided explicitly by the sender/transmitter of the message or service call through an appropriate interface, for example a field of the message defining the index of the message within its sequence,
- or provided implicitly, by the consecutive order in which the messages or service calls of the sequence are received over time.
Once the sequence and the order of the messages or service calls are identified, the sequence can be properly managed.
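A minimal sketch of how a message might carry a sequence correlator and an optional explicit order, with the implicit case falling back to arrival order, could look as follows. The names (`Message`, `effective_order`) are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass
from typing import Optional
from collections import defaultdict

@dataclass
class Message:
    payload: str
    correlator: str              # sequence correlator: identifies the sequence
    order: Optional[int] = None  # explicit index within the sequence, if the sender set one

_arrival = defaultdict(int)      # per-sequence arrival counters for the implicit case

def effective_order(msg: Message) -> int:
    """Explicit order if present; otherwise the arrival order within the sequence."""
    if msg.order is not None:
        return msg.order
    _arrival[msg.correlator] += 1
    return _arrival[msg.correlator]
```

The explicit and implicit cases thus converge on a single integer order per message, which is what the sequence management below relies on.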
Fig. 5 represents an illustrative block diagram of high-level sequence management with a centralized shared sequence context. In Fig. 5, the main features and key steps of the system are detailed.
The asynchronous distributed processing system includes an inbound processor 510 receiving an input message 501, and an outbound processor 530 configured to process and deliver the message. The system also includes an overflow memory 540 that holds messages received from the inbound processor whose processing must be stopped in order to maintain the order of the sequence to which they belong.
It should be appreciated that the system includes a plurality of parallel inbound processors 510 and a plurality of parallel outbound processors 530; Fig. 5 is simplified for clarity.
An inbound processor is also referred to as a receiver or receiver process. Accordingly, the inbound processor layer comprising the plurality of inbound processors is also referred to as the receiver layer.
An outbound processor is also referred to as a processor or delivery process. Accordingly, the outbound processor layer comprising the plurality of outbound processors is also referred to as the delivery layer.
Specifically, an inbound processor 510 is configured to perform any of the following: receive a message from a sender such as a publisher; verify the integrity of the message; perform sequence locking and state verification; store the message in one of two areas (namely the queue memory or the overflow area); reply to the sender.
According to an advantageous embodiment, an outbound processor 530 is composed of two processes. The first process, referred to as the delivery process 531, is configured to perform any of the following: fetch a message from the storage queue; send the message to the recipient over a communication channel; exit, so as to be available for other processing.
The second process, referred to as the confirmation process 532, is configured to: receive the reception acknowledgement from the recipient; perform sequence management so as to put the next message of the corresponding sequence, if any, into the storage queue; exit, so as to be available for other processing.
Thus, the delivery layer formed by the outbound processors 532 is asynchronous, which allows enhanced scalability requirements to be met. The system is thereby independent of the latency of the recipients. More precisely, this means that an outbound processor can fetch and deliver a message of a first sequence and then, before it receives the delivery acknowledgement for the message of the first sequence, fetch and deliver another message of a second sequence. An outbound processor can therefore asynchronously process messages from multiple sequences, increasing the number of messages the system can route while always maintaining the correct order of each sequence.
According to the invention, a centralized shared sequence context is implemented, in which a state machine is used for each sequence. Whenever an input message 501 is received at an inbound processor 510, the corresponding sequence context state is checked (520). According to an embodiment, if the corresponding sequence context does not exist, it is created dynamically and transparently. The invention thus does not require sequences to be predefined in the system; on the contrary, it is completely dynamic in this respect. Furthermore, if a message carries no index indicating its order within the sequence, a message order is assigned to the message according to its order of arrival.
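The lazily created per-sequence context described above might look like the following sketch; the `SequenceContexts` class and its field names are assumptions for illustration of the "completely dynamic" property:

```python
class SequenceContexts:
    """Lazily created per-sequence contexts: no sequence is predefined."""
    def __init__(self):
        self._ctx = {}

    def get(self, correlator):
        # Created dynamically and transparently on first access.
        return self._ctx.setdefault(
            correlator, {"state": "waiting", "next_order": 1})

ctxs = SequenceContexts()
ctx_a = ctxs.get("A")            # context for sequence "A" springs into existence
ctx_a["state"] = "processing"    # state machine transition on first message
```

Any later access to sequence "A" sees the same context object, which is what lets all inbound and outbound processors share one state machine per sequence.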
If the sequence state indicates that the outbound processor layer is waiting for the next message of the sequence, i.e. the sequence state is "waiting", then an outbound processor 530 processes the input message 501 according to the standard behaviour (522), the message being made available for asynchronous processing; or
If the sequence state indicates that a message of the sequence is currently being processed, i.e. the sequence state is "processing", then the input message 501 is stored in the sequence-specific overflow memory 540 for later processing (524). The overflow memory 540 is organized/indexed in a manner that does not lose the order of the input messages. The input message 501 is thus retained for further processing and is not available for immediate processing (being out of sequence).
The outbound processor layer receives, according to the standard behaviour, the messages to be processed, the messages actually being in the correct state with respect to the sequence order.
Once a message 501 has been processed, the outbound processor layer looks in the overflow memory 540 for the next pending message of the sequence. If such a message is found, it is pushed to the outbound processor layer for standard processing. If no message is found, the sequence state is set back to "waiting".
The order of each message within a sequence is maintained. The sequence memory defines a sequence order indicating the order of the messages of the sequence to be kept, and the order of the next message that must be processed. Whenever the processing of a message is completed, the sequence order is incremented. The sequence order can thus be seen as a counter.
Any input message whose order does not match the sequence order, i.e. the order of the next pending message, is stored in the overflow memory 540 until the inbound processor 510 receives the correct pending message. This means that the insert/remove operations on the overflow memory 540 take into account the rank of the sequence and the order of each message.
When a message is stored in the overflow memory 540 awaiting its pending turn in the sequence, a situation may occur in which the sequence will never be unlocked by the next message of the sequence. Although this situation is infrequent, the invention provides a dynamic mechanism that leaves an indicator in the context of the sequence so that, when the sequence is considered expired, action is taken on it, for example discarding the expired messages or the expired sequence.
Some message-oriented middleware (known as MOM) provides an ordering feature by avoiding concurrency (that is, by insisting on a single dequeuing consumer). They thus provide an ordering guarantee at the cost of scalability. Advanced Queuing is an illustrative example.
Some other MOMs (for example MQ-Series) do provide a correlator-based ordering feature, but they require ordered messages to be processed as if logically grouped together. Moreover, the group size is necessarily limited, and the MOM imposes additional constraints on the dequeuing processing.
The distributed parallel processing according to the invention provides strict ordering while retaining concurrency and scalability, without special constraints regarding processing affinity or the manner of dequeuing and processing messages or service calls. The enhanced scalability and compliance of the method, apparatus and system of the invention make it possible to:
- achieve completely asynchronous delivery processing using the "deliver and exit" principle, in which a message is delivered but the process does not wait for the acknowledgement, another process (the confirmation process) being responsible for receiving the acknowledgement, which makes it possible to cope with high message throughput; and
- achieve completely distributed processing and eliminate any affinity between a sequence and the inbound/outbound processors, thereby allowing any inbound/outbound processor to process the messages of any sequence.
A single-process architecture may be the commonsense approach to handling message ordering, but such an architecture can result in large, sometimes unacceptable constraints in terms of compliance and scalability. On the contrary, the invention allows, on both the inbound processor side and the outbound processor side, all the benefits of distributed parallel processing while guaranteeing ordering, as long as the cardinality of the sequences is high. In this sense, the invention fully exploits parallel sequence processing only when the system must concurrently handle a large number of sequences.
For sequence maintenance, there is no prerequisite regarding the memory areas and the dequeuing processes:
- the queuing process for messages received in the inbound processor layer and the queuing process for message processing in the outbound processor layer need not support sequence maintenance, since this is handled according to the invention;
- the parallel storage and parallel retrieval (i.e. exclusion/dequeuing) of messages are fully preserved;
- according to a non-limiting example, the queue memory itself can be local to a node, whereas the overflow memory is global, i.e. shared by all outbound processors. The overflow memory needs to be shared because any node can process a given sequence and must therefore be able to access the unique overflow area in order to actually enqueue and dequeue messages in that memory. If the queue memory is not shared by all outbound processors, it can be dedicated to a single outbound processor or to several outbound processors. In the case where the queue memory is not shared by all outbound processors, each message is received by only one local queue memory;
- the storage of rejected messages is made easier: since a message can be placed in the overflow area with a status change, no exception queue is required.
According to the method, apparatus and system of the invention, message sequences are processed in a distributed parallel mode by identifying the sequence and identifying the order of each message within the sequence. In addition to this identification, the sequences are managed and resequenced, including sequence locking and timeouts. These aspects are described in detail below.
Sequence identification
Within a flow of messages or events sharing a given transmission channel, each group of related messages must be explicitly defined in the sense that its ordering is to be observed. Fig. 6 represents an illustrative flow chart of sequence identification within a transmission and processing channel 620 between a sending system 610 and a processing system 630.
A dedicated parameter is provided for each messaging primitive involved in a given transmission. This dedicated parameter is also referred to as the sequence correlator. It is typically an alphanumeric value defined by the transmitter of the message. Each component involved uses this parameter to actually identify messages belonging to the same sequence. For example, messages 1-4 are parsed in the transmission channel 620 and identified as messages #1...#4. Although these related and ordered messages share the same transmission channel 620, they do not follow one another contiguously: in the transmission channel 620 they are interleaved with messages belonging to other sequences.
The sequence correlator parameter is defined in a manner that guarantees it is not shared by different conflicting processes in a given transmission and processing chain. In this sense, strict uniqueness must be observed. Preferably, defining the sequence correlator is the responsibility of the business flow using the system.
Identifying the message order within a sequence
Messages that need to be processed in a particular order can be classified into two types:
- messages of a first type, for which the order or rank within the sequence is known, preferably at message creation time. For these, the generating processing can explicitly assign a message ordinal to each message in the transmission primitive. This message ordinal is then transmitted and preserved by each processing stage forming part of the whole chain, until the final processing is performed; and
- messages of a second type, for which the order within the sequence is determined as the messages are generated. The processing of these messages of the second type is generally incremental, meaning that each new message (or event) processed alters the result of the processing of the first message of the sequence. In this case, the transmission system neither knows the message ordinal of a message within a given sequence nor the total number of messages in the sequence.
For brevity, in the present description, the message ordinal of a message within a given sequence is referred to as the message order.
Core sequence management
As shown in Fig. 7, an illustrative asynchronous distributed processing system includes:
- a plurality of inbound processors 710, 720, ..., 740 forming the inbound processor layer. The inbound processors receive input messages 711, 721, 731 and 741, store these messages in a queue memory 750, and may acknowledge the good reception of these input messages 711, 721, 731 and 741 to the transmitting system;
- a plurality of outbound processors 760, 770, ..., 790 forming the outbound processor layer. Each outbound processor is configured to fetch messages from the queue memory 750 and process them. The outbound processors are also responsible for forwarding the processed messages to the application.
An inbound processor is also referred to as a receiver or receiver process. An outbound processor is also referred to as a processor or delivery process.
Fig. 8A represents a refinement of the embodiment of Fig. 7, illustrating an illustrative step of sequence processing in which an inbound processor receives the first message of a sequence A from the transmitting system. In this refinement, an additional component referred to as the sequence memory 800 is implemented as part of the storage layer between the plurality of inbound processors 810, 820, ..., 840 and the plurality of outbound processors 860, 870, ..., 890. The sequence memory 800 includes:
- a centralized or commonly shared sequence mutex 804 (also referred to as the mutex), to guarantee that only one processor at a time processes a message of a given sequence (i.e. with the same sequence correlator); any competing attempts are serviced on a first-come, first-served basis;
- a centralized or commonly shared sequence state context 802 (also referred to as the state), to maintain the shared sequence state among all processes; a sequence is uniquely identified by its sequence correlator. For each event concerning a given sequence, the state also makes it possible to determine the behaviour to apply:
o a "waiting" state, meaning that the next input message can be queued for delivery;
o a "delivering" state, meaning that the next message of the sequence is retained.
- a centralized or commonly shared overflow memory 806 (also referred to as the overflow or overflow area), to guarantee that the plurality of outbound processors can only access the next message they are to process, the other messages remaining "pending" in the overflow memory. The overflow memory is used as storage for the indexed and ordered sequence messages that are not ready to be delivered in the current sequence state.
These three components handle information as a context: the state 802, the mutex 804 and the overflow memory 806 have the same properties as the queue memory 850. They can be implemented as one of the following:
- an in-memory or file-based store, if all the distributed processes run on the same node; or
- a client-server database, if the distributed processes run on several nodes, the server system then being a remote system.
The greatest consistency between the storage layer and the standard message store can be obtained by implementing both in a common RDBMS engine sharing a single transaction.
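Under the single-node, in-memory option above, the three shared components (state, mutex, overflow) could be sketched as follows; `SharedSequenceMemory` and its field names are hypothetical, and a client-server database would replace these structures in the multi-node case:

```python
import threading
from collections import defaultdict

class SharedSequenceMemory:
    """In-process stand-ins for the shared state 802, mutex 804 and overflow 806."""
    def __init__(self):
        self.mutexes = defaultdict(threading.Lock)  # one mutex per sequence correlator
        self.states = {}                            # correlator -> state string
        self.overflow = defaultdict(dict)           # correlator -> {message order: message}

mem = SharedSequenceMemory()
with mem.mutexes["A"]:              # only one processor touches sequence "A" at a time
    mem.states["A"] = "processing"
    mem.overflow["A"][2] = "second message, held until the sequence order reaches 2"
```

Keying the overflow by message order is what makes the later "fetch the message whose order equals the sequence order" lookups direct.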
According to the method, apparatus and system of the invention:
- the queue memory 850 allows message exchange between the plurality of inbound processors and the plurality of outbound processors, and works independently of the sequence memory 800; and
- the overflow memory 806 of the sequence memory 800 guarantees the ordering of the message exchange between the plurality of inbound processors 810, 820, ..., 840 and the plurality of outbound processors 860, 870, ..., 890.
Fig. 8A illustrates the sequence processing of an input message according to the invention:
- a message 801-1 belonging to sequence correlator "A" is received by an inbound processor 810;
- the inbound processor 810 locks the mutex 804 of sequence "A" (812), thereby preventing any inbound or outbound processor from handling another message with sequence correlator "A";
- the inbound processor 810 checks the central state context 802 of sequence "A" (814): either the sequence does not exist, or the sequence is in the "waiting" state;
- the inbound processor 810 sets the state context 802 of sequence "A" to "processing" (814). The invention assigns to the input message a message order equal to the order of the previously received message incremented by one. Since the input message is the first message of the sequence, the message order assigned to it is set to 1. The message order is preferably kept in the sequence memory 800, more precisely in the sequence context 802. The inbound processor 810 preferably stores the message 851 in the queue memory 850 (816); and
- the inbound processor 810 sends an acknowledgement to the message sender, releases the mutex of sequence "A", and is ready to receive any other input message.
Fig. 8B illustrates the following step of the sequence processing, in which another input message is received by the system:
- a second message 801-2 belonging to sequence "A" is received by an inbound processor 820;
- the inbound processor 820 locks the mutex 804 of sequence "A" (822), thereby preventing any inbound processor 810, 830 or 840, or any outbound processor, from handling another message of sequence "A";
- the inbound processor 820 checks the central state context 802 of sequence "A", whose sequence state is "processing" (824). Since no outbound processor may be allowed to obtain the message, the inbound processor 820 stores the message 807 in the overflow memory 806 (826).
A message order equal to the message order of the previous input message incremented by one is assigned to the input message. Since the message order of the first message is 1, the message order assigned to this input message is 2. In addition, the invention incrementally defines a sequence order, i.e. the order of the next pending message of the sequence. Thus, when several messages of the same sequence are stored in the sequence memory 800, their message orders allow the system to identify the correct message that must be forwarded to the queue memory 850. The correct message is the message whose message order corresponds to the sequence order defined for the sequence. Advantageously, this applies both when input messages carry a message order assigned by the sender and when the input messages are assigned an order according to their order of arrival at the inbound processors.
- the inbound processor 820 sends an acknowledgement to the message sender, releases the mutex of sequence "A", and is ready to receive any other input message.
Fig. 8C illustrates the following step of the sequence processing, in which a message stored in the queue of the queue memory is dequeued and reaches one of the outbound processors for processing:
- one of the outbound processors 870 fetches the message 851 of sequence "A" from the queue memory 850 (871). Thanks to the invention, this message is automatically the next message of the sequence that must be processed: its order is the order of the last processed message incremented by one. In this illustrative embodiment, since the message stored in the queue memory is the first message of sequence A, its order is necessarily "1";
- the outbound processor 870 delivers the message of order "1" to the relevant recipient, or to another routing device (873) before further delivery to the recipient. Once the message has been sent in step 873, the outbound processor 870 is available for other processing. It remains able to work even though it has not yet received the reception acknowledgement from the recipient. For example, the outbound processor 870 can fetch and send another message belonging to another sequence, thereby achieving asynchronous delivery and improving throughput. It can also receive reception acknowledgements from any recipient regarding any message. The number of messages and processing operations the outbound processor can perform is therefore not limited by the response time of the recipient of the message sent in step 873.
The recipient receives the message sent in step 873 from the delivery process 8701 of the outbound processor 870. In response, the recipient sends an acknowledgement message to the system. The confirmation process 8702 of the same outbound processor 870, or the confirmation process 8602 of another outbound processor 860, receives the acknowledgement message. This corresponds to step 874 depicted in Fig. 8C.
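The "deliver and exit" split between a delivery process and a confirmation process, as described for Fig. 8C, can be sketched as two functions; all names here are illustrative, and real acknowledgements would of course arrive over a communication channel rather than by direct call:

```python
sent_log = []          # messages handed to the communication channel
pending_acks = set()   # deliveries awaiting a recipient acknowledgement

def deliver_and_exit(correlator, order):
    """Delivery process (cf. 8701): send, record the pending ack, return immediately."""
    sent_log.append((correlator, order))
    pending_acks.add((correlator, order))

def on_confirmation(correlator, order):
    """Confirmation process (cf. 8702 or 8602): may run in any outbound processor."""
    pending_acks.discard((correlator, order))
    return (correlator, order + 1)   # order of the sequence's next eligible message

deliver_and_exit("A", 1)
deliver_and_exit("B", 1)   # a second sequence is served before A's ack has arrived
```

The design point is that `deliver_and_exit` never blocks on the recipient: throughput is decoupled from recipient latency, exactly the scalability property claimed above.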
Fig. 8D illustrates the following step of the sequence processing, in which the next message stored in the overflow memory 806 is forwarded to the queue memory 850 and then to one of the outbound processors for processing. The steps described in Fig. 8D are triggered by the reception (875) of the acknowledgement message by the system:
- the outbound processor 860 checks the sequence order in the state context 802 (862) to determine the order of the next message of the identified sequence that must be processed. Since the sequence order is set to "2", the outbound processor 860 fetches from the overflow memory 806 the message 807 of sequence "A" having message order "2" (809). The message 807 is then stored in the queue memory 850, where any outbound processor can obtain it. The state context 802 remains "processing". The sequence order is incremented and thus set to "3", indicating that the next pending message is the message whose message order equals "3";
- the outbound processor 860 exits and is ready to process another message stored in the queue memory 850.
This processing continues until the whole sequence of input messages has been processed and delivered.
The processing illustrated above in Figs. 8A-8D is identical whether or not the input messages are received with a sequence index indicating the order of the messages within the sequence. When a sequence index is provided, it is used as the ordering precedence; otherwise, the inbound processors generate the precedence according to the order of reception within the same sequence.
Resequencing
In addition to the processing detailed above in Figs. 8A-8D, in which the inbound processors receive the input messages in strict sequence order, the same processing is implemented to handle messages received out of sequence. The only actual prerequisite is that, for every message of a given sequence, the message sender provides an index indicating the order of each message within the sequence.
As previously indicated, the invention increments the sequence order defining the order of the next pending message of a sequence. When the queue memory 850 can receive a message of a given sequence, the sequence order is checked. Only a message whose message order equals the sequence order is forwarded to the queue memory 850. If no message with a message order equal to the sequence order exists in the overflow area 806 of the sequence memory 800, the processing of the sequence is stopped until the message with the correct order is received from an inbound processor. The sequence order thus acts as a counter indicating the message to be processed. Preferably, the sequence order is stored in the sequence memory 800.
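The sequence-order counter acting as a gate on the overflow memory can be sketched as follows, with a hypothetical `release_next` helper that pops a message from overflow only when its message order equals the counter:

```python
def release_next(overflow, sequence_order):
    """Pop the overflow message matching the sequence order, or None if absent."""
    return overflow.pop(sequence_order, None)

overflow_a = {3: "third message", 5: "fifth message"}  # message order -> message
released = release_next(overflow_a, 3)   # counter is 3: the message is released
stalled = release_next(overflow_a, 4)    # gap at 4: nothing released, sequence stalls
```

A `None` result corresponds to the stopped sequence above: processing resumes only once an inbound processor supplies the message with the missing order.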
Fig. 8E illustrates the steps of the sequence processing of an input message 801 in the case of sequence reordering.
In addition to the processing described above performed by the plurality of inbound processors 810, ..., 840, further steps can occur for the index-driven reordering:
The order of the message indicated by its index is compared with the sequence order maintaining the order of the next message to be processed. The sequence order maintaining the order of the next message to be processed is preferably kept up to date by incrementally updating the sequence order indication in the sequence memory.
- If the order of the message matches the sequence order (818), the input message 815 is stored in the queue memory 850, where it is available to the plurality of outbound processors. The sequence state is set to "processing". The pending sequence order is incremented, and the processing described above applies.
- If the order of the message indicated by its index exceeds the sequence order (819), the message 813 is stored in the overflow memory 806. The sequence state 802 is set to "pending". The message is not actually processed. The sequence will restart when an inbound processor receives the message with the expected message order to be processed. In that case, the inbound processor will store the corresponding message in the queue memory, and the sequence state will be set to "processing".
As for the processing shown in Figs. 8A-8D, when an outbound processor finishes its work on a message, it looks in the overflow memory 806 for a message whose message order matches the sequence order (the sequence order maintaining the order of the next message to be processed). If such a message is found, it is stored in the queue memory 850; if not, the sequence state is either set to "pending" (if messages of the sequence exist in the overflow area, but none has an order equal to the sequence order) or set to "waiting" (if no message of the sequence exists in the overflow area).
Managing sequence locks and timeouts
The message received as previously mentioned, for given sequence is stored in overflow storage 806, as long as its message order is not Match pending message order.This is the locking situation for whole sequence, if next pending expected message not by Enter station processor reception.
In a particular embodiment, if present invention ensure that processing needs this locking situation really, then locking situation exists It is also limited in terms of time.Processing also definition is expressed as global sequence's timeout value of duration (being represented with second, minute, day ...).
In another embodiment, the sequence context 802 can include an absolute time value defined as the sequence timeout. Whenever an inbound processor or an outbound processor accesses a given sequence context record, meaning that it is processing a message belonging to that sequence (the message being an indication of activity in the sequence), this absolute time value is updated to the sum of the current system time and the sequence timeout duration.
In another embodiment, a sequence timeout collector can be implemented, which wakes up periodically and scans the complete list of sequence contexts. In this particular implementation, any sequence whose sequence timeout duration has expired is detected. This processing option is implemented using the sequence timeout value.
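The two timeout mechanisms just described, refreshing an absolute expiry on any activity and periodically sweeping for expired sequences, can be sketched as follows; all names, and the `now` parameter used as a test hook, are illustrative assumptions:

```python
import time

# Illustrative sketch of the sequence-timeout mechanisms described above.
# Function names and the `now` override are assumptions for illustration.

def touch_sequence(ctx, timeout_duration_s, now=None):
    """Any inbound/outbound activity on a sequence pushes its absolute
    expiry to (current system time + sequence timeout duration)."""
    now = time.time() if now is None else now
    ctx["expires_at"] = now + timeout_duration_s

def collect_expired(sequence_contexts, now=None):
    """Periodic collector sweep: return the ids of sequences whose
    sequence timeout has expired."""
    now = time.time() if now is None else now
    return [sid for sid, ctx in sequence_contexts.items()
            if ctx.get("expires_at", float("inf")) <= now]
```

On each expired sequence id returned by the sweep, the system could then, as the following list states, purge the corresponding overflow messages and sequence context and record any appropriate handling of the expiration event.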
Depending on the implementation, the methods, apparatus and systems according to the invention can:
remove any corresponding messages in overflow memory 806 and in the sequence context 802;
perform and record any appropriate processing for the expiration event of a specific sequence:
o asynchronous delivery, for example retrieving each item in order, or, while waiting for a required item, skipping missing items until the required item is found,
o alarms, ...
The invention has a variety of applications in data processing. However, it is particularly suitable for:
Messaging servers, such as the Amadeus Messaging Server (AMS), in which the application continuously handles the sending and receiving of messages, acting as the hub of the infrastructure of a company (more specifically a software company). In the ticketing and reservation industry, AMS can serve both reservation systems and departure control systems. All teletype traffic is subject to enforced ordering. Teletype messages are commonly referred to as TTY. TTY Type B is an airline industry standard for exchanging messages over asynchronous paths in the exact sequence required by the processing of a given functional context. For example, a first message contains a list of boarding passengers, a second message contains a list of cancelled passengers, and a third message contains a list of added passengers. These passenger lists must be processed in strict order.
Another field of application is, for example, the OTF high-level framework (OHF), a middleware software component used by many applications to achieve guaranteed asynchronous delivery. A main application of OHF sequencing is the synchronization that takes place between the coupon database (CDB) and the electronic ticket application. It often happens that many changes are made to a single electronic ticket within a limited period of time. These changes must be forwarded in the correct order to the electronic ticket application or to the coupon database so that they remain synchronized.
Although the above description is mainly given in the context of travel solutions provided by airlines, those skilled in the art should understand that embodiments of the invention are not limited to use by airlines; on the contrary, they can also be adapted for use with other kinds of travel modes and by other travel providers, non-limiting examples of which include providers of travel by ship, train, automobile or bus, and travel products such as hotels.
By way of non-limiting and illustrative example, the foregoing description provides a detailed and informative account of various methods, apparatus and software for implementing the exemplary embodiments of the present invention. However, when the foregoing description is read in conjunction with the accompanying drawings and the appended claims, various modifications and adaptations may become apparent to those skilled in the art. As just some examples, those skilled in the art may attempt the use of other similar or equivalent processes, algorithms and data representations. In addition, the various names used for the different elements, functions and algorithms are merely illustrative and are not intended to limit the invention, since any appropriate names can be used to represent these various elements, functions and algorithms. All such and similar modifications of the teachings of the invention remain within the scope of the embodiments of the invention.
Furthermore, some features of the exemplary embodiments of the invention may be used to advantage without the corresponding use of other features. Accordingly, the foregoing description should be regarded merely as an illustration of the principles, teachings and embodiments of the invention, and not as a limitation thereof.
Embodiments of the various techniques described herein can be implemented in digital electronic circuitry, computer hardware or handheld electronic device hardware, firmware, software, or combinations thereof. Embodiments can be implemented as a program or software product, i.e. a computer program tangibly embodied in an information carrier, such as a machine-readable storage device, or embodied in a propagated signal, for execution by a data processing apparatus, such as a programmable processor, a computer, a tablet computer or multiple computers, or for controlling the operation of such a data processing apparatus. A program such as the above computer program can be written in any form of programming language, including compiled or interpreted languages, and can be deployed in any form, including as a stand-alone program or as a module, component, subroutine or other unit suitable for use in a computing environment. A program can be deployed to be executed on one computer or tablet computer, either located at one site or distributed across multiple sites, and executed on multiple computers or tablet computers interconnected by a communication network or a wireless network.
Processors suitable for the execution of a computer program include, for example, general-purpose and special-purpose microprocessors, and any one or more processors of any kind of digital computer, tablet computer or electronic device. Generally, a processor receives instructions and data from a read-only memory and/or a random-access memory. The elements of a computer include at least one processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer or electronic device can also include one or more mass storage devices for storing data, for example magnetic disks, magneto-optical disks or optical disks, and/or can be operatively coupled to such mass storage devices to receive data from them or transfer data to them.
Embodiments can be implemented in a computing system that includes a back-end component, e.g. as a data server, or that includes a middleware component, e.g. an application server, or that includes a front-end component, e.g. a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation, or any combination of such back-end, middleware or front-end components. The components can be interconnected by any form or medium of digital data communication, e.g. a communication network or a wireless network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g. the Internet, or a wireless network such as a WiFi network.
While certain features of the described implementations have been illustrated herein, those skilled in the art will recognize many modifications, substitutions, changes and equivalents. It is therefore to be understood that the appended claims are intended to cover all such modifications and changes as fall within the spirit and scope of the embodiments of the invention.

Claims (19)

1. A computer-implemented method for sequencing distributed asynchronous messages in a distributed and parallel system, the system having multiple inbound processors forming an inbound processor layer and multiple outbound processors forming an outbound processor layer, characterized in that the method comprises the following steps performed using at least one data processor:
in any inbound processor of the multiple inbound processors, receiving an input message, the input message having a sequence correlation identifying a sequence containing the input message,
checking the sequence state of the sequence in a shared sequence memory;
determining whether the input message is the next message to be processed so as to maintain the order of the messages in the sequence;
- if the sequence state indicates that no outbound processor in the outbound processor layer is currently processing a message of the sequence, and if the input message is determined to be the next message of the sequence to be processed, forwarding the input message to a queue memory, the message then being fetched for processing by an available outbound processor of the outbound processor layer;
- if the sequence state indicates that at least one outbound processor in the outbound processor layer is currently processing a message of the sequence, or if the queue memory already contains a pending message of the sequence, or if the input message is determined not to be the next message of the sequence to be processed, storing the input message in a memory of a shared overflow memory, where it is kept for further processing;
wherein the step of determining whether the input message is the next message to be processed so as to maintain the order of the messages in the sequence comprises:
- determining a message order indicating the order of the input message in the sequence,
- if the input message received does not possess any index indicating its message order in the sequence, the step of determining the message order comprising assigning to the input message a message order indicating the order of the input message in its sequence, and storing the assigned message order in the sequence memory;
- comparing the message order with a sequence order defining the order of the next message of the sequence to be processed,
- if the message order is equal to the sequence order, determining that the message is the next message to be processed so as to maintain the order of the messages in the sequence,
- if the message order is not equal to the sequence order, determining that the message is not the next message to be processed so as to maintain the order of the messages in the sequence.
2. The method according to claim 1, wherein, when an outbound processor completes the processing of a message of a given sequence, the sequence order of the given sequence is incremented, and wherein, when the sequence order of a sequence is incremented, if the overflow memory contains a message whose message order is equal to the incremented sequence order, that message is forwarded to the queue memory.
3. The method according to claim 1, wherein the assigned message order corresponds to the order of any one of the latest messages of the sequence received in the inbound processors, plus one increment.
4. The method according to any one of claims 1-3, wherein the input message received in the inbound processor possesses an index indicating its message order in the sequence.
5. The method according to any one of claims 1-3, wherein, if the queue memory does not contain any message of the sequence of the input message, and if the message order of the input message is greater than the sequence order indicated in the sequence memory, the input message is stored in the overflow memory until the sequence order has been incremented up to being equal to the message order of the input message.
6. The method according to any one of claims 1-3, wherein the outbound processors work asynchronously, so as to allow an outbound processor to send a message and, once the message has been sent and before a response acknowledgment from the recipient of the message has been received, to be available for another processing.
7. The method according to any one of claims 1-3, wherein an outbound processor comprises a delivery process that sends messages to recipients and an acknowledgment process that receives acknowledgments from the recipients, the delivery process and the acknowledgment process working independently.
8. The method according to any one of claims 1-3, wherein, upon reception of the input message and before the checking step, an inbound locking step is performed in which all inbound processors are prevented from receiving another message of the sequence until the input message has been stored in the sequence memory or sent to the queue memory, wherein the inbound locking step comprises locking a mutex dedicated to the sequence, the mutex being stored in the sequence memory.
9. The method according to any one of claims 1-3, wherein, when an input message is forwarded from the queue memory to an outbound processor, an outbound locking step is performed in which all other outbound processors are prevented from receiving another message of the sequence until the processing of the input message has been completed, wherein the outbound locking step comprises locking a mutex dedicated to the sequence, the mutex being stored in the sequence memory.
10. The method according to claim 9, wherein, when an outbound processor becomes available, it checks the mutex of the sequence of the message in the queue memory and fetches the message only if the mutex is not locked.
11. The method according to any one of claims 1-3, wherein a message whose message order is greater than the sequence order is first stored in the overflow memory, and the message is discarded from the overflow memory once a timeout value is reached, the timeout value being assigned to any one of the following: the sequence of the message, and the message.
12. A non-transitory computer-readable medium comprising software program instructions, wherein execution of the software program instructions by at least one data processor causes the performance of operations comprising the execution of the method according to any one of claims 1-11.
13. A distributed and parallel processing system for sequencing asynchronous messages, the system comprising:
- an inbound processor layer comprising multiple inbound processors comprising at least one data processor, each of the multiple inbound processors being configured to independently receive multiple input messages, each related to a sequence;
- an outbound processor layer comprising multiple outbound processors comprising at least one data processor, each of the multiple outbound processors being configured to independently process and forward the multiple input messages; and
characterized in that the system comprises:
- a storage layer comprising at least one memory, the storage layer comprising:
a queue memory for storing the input messages to be sent to the multiple outbound processors;
a shared sequence memory comprising:
- a sequence state context for maintaining and updating the state of each sequence of input messages; and
- a shared overflow memory configured to receive the messages from the inbound processors and to forward them in order to the queue memory,
the system being further configured to determine whether an input message is the next message to be processed so as to maintain the order of the messages in its sequence, and to perform the following steps using at least one data processor:
- if the sequence state indicates that no outbound processor in the outbound processor layer is currently processing a message of the sequence, and if the input message is determined to be the next message of the sequence to be processed, forwarding the input message to the queue memory, the message then being fetched for processing by an available outbound processor;
- if the sequence state indicates that at least one outbound processor in the outbound processor layer is currently processing a message of the sequence, or if the queue memory already contains a pending message of the sequence, or if the input message is determined not to be the next message of the sequence to be processed, storing the input message in the shared overflow memory, where it is kept for further processing.
14. A computer-implemented travel monitoring method for processing asynchronous messages between at least one server application and at least one client application in a parallel environment having multiple parallel inbound processors and multiple parallel outbound processors, the method comprising the following steps performed using at least one data processor:
- receiving an input message in an inbound processor of the multiple inbound processors, the input message having a sequence correlation identifying a sequence containing the input message,
- checking the sequence state of the sequence in a sequence memory;
- determining whether the input message is the next message to be processed so as to maintain the order of the messages in the sequence;
- if the sequence state indicates that no outbound processor of the multiple outbound processors is currently processing a message of the sequence, and if the input message is determined to be the next message of the sequence to be processed, forwarding the input message to a queue memory and then to an available outbound processor for processing;
- if the sequence state indicates that at least one outbound processor is currently processing a message of the sequence, or if the queue memory already contains a pending message of the sequence, or if the input message is determined not to be the next message of the sequence to be processed, storing the input message in an overflow memory, where it is kept for further processing,
wherein the messages contain data related to passengers, and the sequence correlation comprises data related to a reference of a transport service.
15. The method according to claim 14, further comprising forwarding the processed messages from an outbound processor to at least one of the following: a travel booking and seat reservation system, an inventory system of an airline, an electronic ticketing system of an airline, an airport departure control system, an operating system of an airport, an operating system of an airline, an operating system of a ground handler.
16. The method according to any one of claims 14 and 15, wherein the reference of the transport service comprises at least one of the following: a flight number, a date and a booking class.
17. The method according to any one of claims 14 and 15, wherein the messages represent any one of the following: boarding passengers, passengers having cancelled a flight, added passengers.
18. The method according to any one of claims 14 and 15, wherein a sequence timeout value is provided for each input message in order to remove, once the sequence timeout value is reached, the input messages stored in the overflow memory, the sequence timeout value being triggered by the take-off time of a flight, or by any one of the following: the deadline of a flight quotation, or the deadline of a promotion.
19. A non-transitory computer-readable medium comprising software program instructions, wherein execution of the software program instructions by at least one data processor causes the performance of operations comprising the execution of the method according to any one of claims 14-18.
CN201380036210.XA 2012-08-02 2013-08-01 Method, system and computer program product for sequencing asynchronous messages in a distributed and parallel environment Active CN104428754B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
EP12368017.5 2012-08-02
US13/565,284 2012-08-02
US13/565,284 US8903767B2 (en) 2012-08-02 2012-08-02 Method, system and computer program product for sequencing asynchronous messages in a distributed and parallel environment
EP12368017.5A EP2693337B1 (en) 2012-08-02 2012-08-02 Method, system and computer program products for sequencing asynchronous messages in a distributed and parallel environment
PCT/EP2013/002302 WO2014019701A1 (en) 2012-08-02 2013-08-01 Method, system and computer program product for sequencing asynchronous messages in a distributed and parallel environment

Publications (2)

Publication Number Publication Date
CN104428754A CN104428754A (en) 2015-03-18
CN104428754B true CN104428754B (en) 2018-04-06

Family

ID=48953356

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201380036210.XA Active CN104428754B (en) 2012-08-02 2013-08-01 To method, system and the computer program product of asynchronous message sequence in distributed parallel environment

Country Status (5)

Country Link
JP (1) JP6198825B2 (en)
KR (1) KR101612682B1 (en)
CN (1) CN104428754B (en)
IN (1) IN2014DN10080A (en)
WO (1) WO2014019701A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107093138B (en) * 2017-04-21 2019-04-30 山东佳联电子商务有限公司 Auction ask-bid system based on a distributed non-blocking asynchronous message processing mode and its operation method
WO2018224659A1 (en) * 2017-06-08 2018-12-13 Amadeus S.A.S. Multi-standard message processing
EP3419250B1 (en) * 2017-06-23 2020-03-04 Vestel Elektronik Sanayi ve Ticaret A.S. Methods and apparatus for distributing publish-subscribe messages
CN110865891B (en) * 2019-09-29 2024-04-12 深圳市华力特电气有限公司 Asynchronous message arrangement method and device
CN111045839A (en) * 2019-12-04 2020-04-21 中国建设银行股份有限公司 Sequence calling method and device based on two-phase transaction message in distributed environment
CN111506430B (en) * 2020-04-23 2024-04-19 上海数禾信息科技有限公司 Method and device for processing data under multitasking and electronic equipment
CN111562888B (en) * 2020-05-14 2023-06-23 上海兆芯集成电路有限公司 Scheduling method for self-updating memory

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5588117A (en) * 1994-05-23 1996-12-24 Hewlett-Packard Company Sender-selective send/receive order processing on a per message basis
WO2003071435A1 (en) * 2002-02-15 2003-08-28 Proquent Systems Corporation Management of message queues
CN102414663A (en) * 2009-05-18 2012-04-11 阿玛得斯两合公司 A method and system for managing the order of messages
WO2012051366A2 (en) * 2010-10-15 2012-04-19 Attivio, Inc. Ordered processing of groups of messages

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09101901A (en) * 1995-10-06 1997-04-15 N T T Data Tsushin Kk System and method for message communication between processes performed on personal computer operated by multiprocess
US7698267B2 (en) * 2004-08-27 2010-04-13 The Regents Of The University Of California Searching digital information and databases
GB0613195D0 (en) * 2006-07-01 2006-08-09 Ibm Methods, apparatus and computer programs for managing persistence in a messaging network
WO2008105099A1 (en) * 2007-02-28 2008-09-04 Fujitsu Limited Application-cooperative controlling program, application-cooperative controlling method, and application-cooperative controlling apparatus
US8392925B2 (en) * 2009-03-26 2013-03-05 Apple Inc. Synchronization mechanisms based on counters

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5588117A (en) * 1994-05-23 1996-12-24 Hewlett-Packard Company Sender-selective send/receive order processing on a per message basis
WO2003071435A1 (en) * 2002-02-15 2003-08-28 Proquent Systems Corporation Management of message queues
CN102414663A (en) * 2009-05-18 2012-04-11 阿玛得斯两合公司 A method and system for managing the order of messages
WO2012051366A2 (en) * 2010-10-15 2012-04-19 Attivio, Inc. Ordered processing of groups of messages

Also Published As

Publication number Publication date
WO2014019701A1 (en) 2014-02-06
KR101612682B1 (en) 2016-04-14
CN104428754A (en) 2015-03-18
IN2014DN10080A (en) 2015-08-21
JP6198825B2 (en) 2017-09-20
JP2015527658A (en) 2015-09-17
KR20150037980A (en) 2015-04-08

Similar Documents

Publication Publication Date Title
CN104428754B (en) Method, system and computer program product for sequencing asynchronous messages in a distributed and parallel environment
US7991847B2 (en) Method and system for managing the order of messages
US7444596B1 (en) Use of template messages to optimize a software messaging system
CN102469033A (en) Message subscription system and message sending method
US9448861B2 (en) Concurrent processing of multiple received messages while releasing such messages in an original message order with abort policy roll back
US20090015398A1 (en) System and method of providing location information of checked baggage
US20120076152A1 (en) System and method for priority scheduling of plurality of message types with serialization constraints and dynamic class switching
US9124448B2 (en) Method and system for implementing a best efforts resequencer
EP2693337B1 (en) Method, system and computer program products for sequencing asynchronous messages in a distributed and parallel environment
US8903767B2 (en) Method, system and computer program product for sequencing asynchronous messages in a distributed and parallel environment
CN116382943A (en) Sequential message processing method, bus system, computer device, and storage medium
US11494717B2 (en) System and method for supply chain management
CN109039846A (en) The method for avoiding deadlock, system and the transannular device of annular interconnection
Kim et al. On the Discrete‐Time GeoX/G/1 Queues under N‐Policy with Single and Multiple Vacations
EP3513292B1 (en) Multi-standard message processing
CN103426223B (en) Queuing and service system based on bar code
US10163076B2 (en) Consensus scheduling for business calendar
CN115545620B (en) Logistics transportation method and device based on block chain, electronic equipment and readable medium
US7761879B1 (en) System and method for workforce management
CA2677367A1 (en) Interface module

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant