CN101326508A - Intelligent messaging application programming interface - Google Patents
- Publication number
- CN101326508A, CNA2005800460930A, CN200580046093A
- Authority
- CN
- China
- Prior art keywords
- message
- programming interface
- application programming
- application
- api
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Data Exchanges In Wide-Area Networks (AREA)
- Computer And Data Communications (AREA)
Abstract
Message publish/subscribe systems are required to process high message volumes with reduced latency and fewer performance bottlenecks. The intelligent messaging application programming interface (API) introduced by the present invention is designed for high-volume, low-latency messaging. The API is part of a publish/subscribe middleware system. With the API, this system operates, among other things, to monitor system performance, including latency, in real time, to employ topic-based and channel-based message communications, and to dynamically optimize system interconnect configurations and message transmission protocols.
Description
Cross-reference to related applications
This application claims priority to U.S. Provisional Application Serial No. 60/641,988, entitled "Event Router System and Method," filed January 6, 2005, and U.S. Provisional Application Serial No. 60/688,983, entitled "Hybrid Feed Handlers And Latency Measurement," filed June 8, 2005, both of which are incorporated herein by reference.
This application is related to U.S. Patent Application Serial No. 11/316,778 (attorney docket 50003-0004), entitled "End-To-End Publish/Subscribe Middleware Architecture," filed December 23, 2005, which is incorporated herein by reference.
Technical field
The present invention relates to data messaging middleware architectures, and more particularly to application programming interfaces in messaging systems having a publish-and-subscribe (hereinafter "publish/subscribe") middleware architecture.
Background
Ever-increasing performance demands on data messaging infrastructure have forced the evolution of networking infrastructure and protocols. Fundamentally, data distribution involves various data sources and destinations, as well as various types of interconnect architectures and communication patterns between those sources and destinations. Examples of existing message transport architectures include hub-and-spoke, peer-to-peer, and store-and-forward.
With a hub-and-spoke configuration, all communications pass through the hub, which typically creates a performance bottleneck when volumes are high; such messaging systems therefore introduce latency. One way around this bottleneck is to deploy more servers and distribute the network load among them, but that architecture presents scalability and operational problems. Compared with a hub-and-spoke configuration, a system with a peer-to-peer configuration places an unnecessary burden on applications to process and filter data, and it is only as fast as its slowest client or node. A system with a store-and-forward configuration stores the data before forwarding it to the next node in the path, in order to provide persistence. The store operation is typically implemented by indexing messages and writing them to disk, which can create a performance bottleneck; moreover, as message volumes grow, the indexing and writing tasks can become quite slow and can therefore introduce additional latency.
Existing message transport architectures have several shortcomings. One common deficiency is that, in existing architectures, data messaging depends on software residing at the application layer. This means that the messaging infrastructure is subject to OS (operating system) queuing and network I/O (input/output), which can create performance bottlenecks. Another common deficiency is that existing architectures use transport protocols statically rather than dynamically, even though other protocols may be more suitable in a given situation. Some examples of common protocols include routed multicast, broadcast, and unicast. Indeed, the application programming interfaces (APIs) in existing architectures are not designed to switch between transport protocols in real time.
In addition, network configuration decisions are normally made at deployment time and are typically defined to optimize for a particular set of network and messaging conditions under specific assumptions. The constraints associated with such static (fixed) configurations preclude real-time, dynamic network reconfiguration. In other words, existing architectures are configured for a particular transport protocol that is not always suited to all network data transport loading conditions, and they therefore cannot always respond in real time to changing conditions or increased load.
Furthermore, when a data message is destined for a specific recipient or group of recipients, existing message transport architectures may use routed multicast to carry the data across the network. In systems built around multicast, however, the number of multicast groups available for distributing the data is limited; as a result, the messaging system ends up delivering data to destinations that have not subscribed to it (that is, that are not consumers of that particular data). Because the clients must then filter the data, their data processing load and drop rate increase. Consequently, a client that becomes overloaded and cannot keep up with the data stream, for whatever reason, ends up dropping incoming data and later requesting retransmissions. Retransmissions impact the entire system, because all clients receive the repeated transmission and all clients re-process the incoming data. Retransmissions can therefore cause multicast storms and can eventually bring the entire system down.
When a system is built for unicast message transmission as a way of reducing the drop rate, the messaging system may experience bandwidth saturation due to data duplication. For example, if more than one client has subscribed to a given topic of interest, the messaging system must deliver the data to each subscriber and, in effect, sends a separate copy of the data to each one. Although this solves the problem of clients having to filter out unsubscribed data, unicast transmission does not scale, and it is therefore unsuitable when a large client base subscribes to particular data or when consumption patterns overlap substantially.
In addition, along the path between publisher and subscriber, messages propagate through hops between applications, and each hop introduces application and operating system (OS) latency. The total end-to-end latency therefore grows as the number of hops increases. Moreover, the message throughput along the path from publisher to subscriber is limited by the slowest node in the path, and existing systems cannot implement end-to-end message flow control to overcome this limitation.
Another common drawback of existing architectures is their slow, frequent, and numerous protocol conversions. This is a result of IT (information technology) "band-aid" strategies in the field of enterprise application integration (EAI), in which an ever-growing number of new technologies are integrated with legacy systems.
Accordingly, there is a need to improve the performance of data messaging systems in many areas. Examples of areas where performance may need to improve include speed, resource allocation, latency, and the like.
Summary of the invention
The present invention is based in part on the above observations and on the idea that a different approach can address these shortcomings with better results. These observations led to the creation of an end-to-end message publish/subscribe middleware architecture for high-volume, low-latency messaging, and in particular to the creation of an intelligent messaging application programming interface (API). Thus, for communications with applications, a data distribution system having an end-to-end message publish/subscribe middleware architecture that includes an intelligent messaging API according to the principles of the present invention can advantageously carry significantly higher message volumes with significantly lower latency. To this end, the present invention contemplates, for example, improving communication between the API and the messaging appliances through a reliable, highly available, session-based fault-tolerant design and by introducing various combinations of the following features: schema binding, partial publication, protocol optimization, real-time channel optimization, a late incremental-calculation definition language, intelligent messaging network interface hardware, application DMA (direct memory access), system performance monitoring, message flow control, transmission logic with temporary message buffering, and value-added message processing.
Thus, in accordance with the purpose of the invention as broadly described and illustrated herein, an exemplary API for communication between an application and a publish/subscribe middleware system includes a communication engine, one or more stubs, and an interprocess communication bus (referred to simply as the bus). In one embodiment, the communication engine can be implemented as a daemon, for example when more than one application receives and sends messages through a single communication engine. In another embodiment, the communication engine can be compiled into the application together with the stub, eliminating the extra daemon hop; in that case, the bus between the communication engine and the stub is defined as an in-process communication bus.
In this embodiment, the communication engine is configured to act as a gateway for communication between the application and the publish/subscribe middleware system. The communication engine operates transparently to the application, dynamically selecting the message transport protocol so as to provide protocol optimization, and monitoring and dynamically controlling transport channel resources and flows in real time. The one or more stubs are used for communication between the application and the communication engine, and the bus, in turn, is used for communication between the one or more stubs and the communication engine.
Still in accordance with the purpose of the invention, a second exemplary API includes a communication engine, one or more stubs, and a bus. The communication engine in this embodiment is built on logical layers that include a message layer and a message transport layer, wherein the message layer includes an application delivery routing engine, an administrative message layer, and a message routing engine, and wherein the message transport layer includes a channel management component that controls in real time, based on system resources, the transport paths of the messages handled by the message layer.
The above embodiments are two examples of how the API may be implemented, and other examples will become apparent from the accompanying drawings and the description that follows. In sum, these and other features, aspects, and advantages of the present invention will become clearer from the description herein, the appended claims, and the accompanying drawings.
Description of drawings
The accompanying drawings, which are incorporated in and form part of this specification, illustrate various aspects of the present invention and, together with the detailed description, serve to explain the principles of the invention. For convenience, the same reference numerals are used throughout the drawings to denote the same or similar elements.
Fig. 1 shows an end-to-end middleware architecture in accordance with the principles of the present invention.
Fig. 1a is a diagram of an overlay network.
Fig. 2 is a diagram of an enterprise infrastructure implemented with an end-to-end middleware architecture in accordance with the principles of the present invention.
Fig. 2a is a diagram of a physical deployment of an enterprise infrastructure in which the messaging appliances (MA) form an alternate network backbone with no intermediary devices.
Fig. 3 shows a channel-based messaging architecture.
Fig. 4 shows one possible topic-based message format.
Fig. 5 shows topic-based message routing and routing tables.
Fig. 6 shows the intelligent messaging application programming interface (API).
Fig. 7 shows the effect of adaptive message flow control.
Figs. 8a and 8b show configurations of an intelligent network interface card (NIC).
Fig. 9 shows a session-based fault-tolerant design.
Fig. 10 shows the interface of a messaging appliance (MA) to the API.
Detailed description
The description herein provides details of an end-to-end middleware architecture for a message publish/subscribe system according to various embodiments of the present invention, and in particular details of the intelligent messaging application programming interface (API). Before outlining the details of these various embodiments, a brief explanation of terms used in this specification follows. Note that this explanation is provided only for clarity and to give the reader an understanding of how these terms may be used; it neither limits these terms to the contexts in which they are used here nor limits the scope of the claims.
The term "middleware" is used in the computer industry as a general term for any programming that mediates between two separate, usually pre-existing, programs. The purpose of adding middleware is to offload from the applications some of the complexity associated with message exchange, which it does by defining the communication interfaces between all participants in the network (publishers and subscribers) and the like. In general, middleware programs provide messaging services so that different applications can communicate. With a middleware software layer, message exchange between applications can be carried out seamlessly. Tying disparate applications together through middleware is commonly known as enterprise application integration (EAI). In this context, however, "middleware" can be a broader term, used in the context of messaging between sources and destinations and of the appliances deployed to carry out that messaging; the middleware architecture therefore covers the networking and computing hardware and the software components that, individually or in the combinations described below, effect efficient data message transport. In addition, the terms "messaging system" and "middleware system" may be used in the context of publish/subscribe systems, in which messaging servers manage the routing of messages between publishers and subscribers. Indeed, the publish/subscribe paradigm in messaging middleware is scalable and is therefore a powerful model.
The term "client" may be used in the context of client-server applications and the like. In one example, a client is a system or application that uses the application programming interface (API) to register with the middleware system, to subscribe to information, and to receive data delivered by the middleware system or to send data to be delivered by the middleware system. An API inside the boundaries of the middleware architecture is one kind of client; an external client is any publish/subscribe system (or external data destination) that does not use this API, and communication with it therefore requires protocol conversion of the messages (described later).
The term "external data source" may be used in the context of data distribution and message publication. In one example, an external data source is a system or application, located inside or outside the enterprise private network, that publishes messages using one of the common messaging protocols or its own protocol. One example of an external data source is a market data exchange that publishes stock quotes, which are distributed to traders via the middleware system. Another example of an external data source is a source of transactional data. Note that, in a typical implementation of the present invention described in more detail below, the middleware architecture employs its own native protocol, and data from external data sources is converted into this native protocol as soon as it enters the middleware system domain, thereby avoiding the multiple protocol conversions typical of legacy systems.
The term "external data destination" is likewise used in the context of data distribution and message publication. For example, an external data destination is a system or application, located inside or outside the enterprise private network, that subscribes to information routed via the local/global network. One example of an external data destination is the aforementioned market data exchange, which processes the trade orders published by the traders. Another example of an external data destination is a consumer of transactional data. Note that, in the foregoing middleware architecture, messages destined for an external data destination are translated from the native protocol into the external protocol associated with that destination.
The term "bus" is commonly used to describe an interconnect, which may be hardware- or software-based. For example, the term bus may be used to describe an interprocess communication link, such as one using sockets or shared memory, and it may also be used to describe an in-process link such as a function call. As will become apparent from the description herein, the present invention can be implemented in various ways and configurations using an intelligent messaging application programming interface (hereinafter the "API") implemented within the middleware architecture. The description therefore begins with the example end-to-end middleware architecture shown in Fig. 1.
This example architecture combines a number of useful features, including: common messaging concepts, the API, fault tolerance, provisioning and management (P&M), quality of service (QoS — consolidating best-effort, guaranteed-while-connected, guaranteed-while-disconnected, and the like), persistent caching for guaranteed-delivery QoS, namespace management and security services, the publish/subscribe ecosystem (core, ingress and egress components), transport-transparent messaging, neighbor-based messaging (a model that is a hybrid of hub-and-spoke, peer-to-peer and store-and-forward, and that uses a subscription-based routing protocol capable of propagating subscriptions to all neighbors when necessary), late schema binding, partial publication (publishing only the information that has changed, as opposed to all of the data), and dynamic allocation of network and system resources. As explained below, the publish/subscribe middleware system advantageously incorporates the fault-tolerant design of the middleware architecture. In each publish/subscribe ecosystem there are at least one, and usually two or more, messaging appliances (MA), each of which is configured to act either as an edge (egress/ingress) MA or as a core MA. Note that the core MA portion of the publish/subscribe ecosystem uses the aforementioned native messaging protocol (native to the middleware system), while the ingress and egress portions, the edge MAs, translate to and from that native protocol, respectively.
In addition to the publish/subscribe system components, the diagram of Fig. 1 shows the logical connections and communications between them. As can be seen, the middleware architecture shown is that of a distributed system. In a system with this architecture, logical communication between two distinct physical components is established using message flows and the associated messaging protocol. A message flow carries one of two classes of message: management messages and data messages. Management messages are used to manage and control the different physical components, to manage data subscriptions, and so on. Data messages are used to carry data between sources and destinations, and in typical publish/subscribe messaging there are multiple senders and multiple receivers of data messages.
With the structural deployment and logical communications shown, this distributed messaging system with a publish/subscribe middleware architecture is designed to perform a number of logical functions. One logical function is message protocol translation, which is advantageously performed at the edge messaging appliance (MA) components. This is because communication within the boundaries of the publish/subscribe middleware system uses the message-level native protocol, independently of the underlying transport logic. This is why the architecture is referred to as a transport-transparent, channel-based messaging architecture.
A second logical function is routing messages from publishers to subscribers. Note that these messages are routed across the entire publish/subscribe network. The routing function is therefore performed by each MA through which the messages propagate, that is, from an edge MA 106a-b (or an API) to the core MAs 108a-c, from one core MA to another, and finally to an edge MA (for example, 106b) or to an API 110a-b. The APIs 110a-b communicate with applications 112_1-n, to publish and subscribe to messages, via an interprocess communication bus (sockets, shared memory, and the like) or via an in-process communication bus such as a function call.
A third logical function is storing messages for the various classes of guaranteed-delivery quality of service, including, for example, guaranteed-while-connected and guaranteed-while-disconnected. This is accomplished by adding a store-and-forward capability.
A fourth function is delivering the messages to the subscribers (as shown in the figure, the APIs deliver messages to the subscribing applications 112_1-n).
In this publish/subscribe middleware architecture, system configuration functions and other management and system performance monitoring functions are handled by the provisioning and management (P&M) system. Configuration relates to the physical and logical configuration of the publish/subscribe middleware system network and components. Monitoring and reporting relate to monitoring the health of all network and system components and reporting the results automatically, on request, or to a log. The P&M system uses management messages to perform its configuration, monitoring and reporting functions. In addition, the P&M system allows the system administrator to define the message namespaces associated with each message routed through this publish/subscribe system. The publish/subscribe network can therefore be partitioned, physically and/or logically, into sub-networks based on namespaces.
The P&M system manages a publish/subscribe middleware system having one or more MAs. These MAs are deployed according to their role in the system as either edge MAs or core MAs. An edge MA is similar to a core MA in most respects, except that it includes a protocol translation engine, which translates messages from external protocols into the native protocol and from the native protocol into external protocols. Thus, generally speaking, the boundary of the publish/subscribe middleware architecture within the messaging system (that is, the boundary of the end-to-end publish/subscribe middleware system) is characterized by an edge populated with the edge MAs 106a-b and the APIs 110a-b; within these boundaries are the core MAs 108a-c.
Note that the system architecture is not confined to a particular limited geographic area; indeed, it is designed to span regions and countries, and even continents. In such cases, an edge MA in one network can communicate, over existing networking infrastructure, with a geographically remote edge MA in another network.
In the exemplary system, the core MAs 108a-c route messages published within the publish/subscribe middleware system to the edge MAs or APIs (for example, APIs 110a-b). The routing maps in the core MAs, in particular, are designed for maximum routing efficiency and low latency. Moreover, the routing between core MAs can change dynamically in real time. For a given message path traversing multiple nodes (core MAs), real-time routing changes are based on one or more metrics, including network utilization, total end-to-end latency, traffic volume, network and/or message delays, loss, and jitter.
Alternatively, instead of dynamically selecting the best path among two or more different paths, the MAs can perform multi-path routing based on message replication, sending the same message along all of the paths. The MAs located at the convergence points of the different paths discard the duplicate messages and forward only the first to arrive. This routing approach has the advantage of optimizing the messaging infrastructure for low latency; its drawback is that the infrastructure needs more network bandwidth to carry the duplicated traffic.
An edge MA has the ability to convert any external message protocol of an inbound message into the native message protocol of the middleware system, and to convert outbound messages from the native message protocol into an external protocol. That is, when a message enters the publish/subscribe network domain (ingress), the external protocol is converted into the native (for example, Tervela™) messaging protocol; and when a message leaves the publish/subscribe network domain (egress), the native protocol is converted into the external protocol. The edge MA also operates to deliver published messages to subscribing external data destinations.
In addition, both the edge MAs 106a-b and the core MAs 108a-c can store messages before forwarding them. One way to implement this capability is with caching engines (CE) 118a-b, and one or more CEs can be connected to the same MA. The API is not nominally regarded as having this store-and-forward capability, although in practice an API 110a-b can store messages before delivering them to an application, and can store messages received from an application (that is, published through it) before delivering them to a core MA, an edge MA, or another API.
When an MA (edge or core) has an active connection to a CE, it forwards all or a subset of the messages it routes to the CE, and the CE writes them to a storage area to provide persistence. These messages are then available for retransmission, on request, within a predetermined time window. Examples of features implemented in this way include data replay, partial publication, and the various levels of quality of service. Partial publication is effective in reducing network and client load, because it requires that only updated information be sent, rather than all of the information.
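To illustrate the retention-and-replay behavior just described, the following is a minimal sketch, under assumed class names and data shapes, of how a caching engine might keep routed messages for a retention window and serve retransmission requests; it is not the patent's implementation.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical caching-engine (CE) retention buffer: routed messages are
// stored with a receive timestamp and can be replayed on request for as long
// as they remain inside the retention window.
public class CachingEngineSketch {
    record StoredMessage(String topic, long sequence, long storedAtMillis, byte[] body) {}

    private final Deque<StoredMessage> store = new ArrayDeque<>();
    private final long retentionMillis;

    public CachingEngineSketch(long retentionMillis) {
        this.retentionMillis = retentionMillis;
    }

    // Called by the MA for every message (or subset of messages) it routes.
    public synchronized void persist(String topic, long sequence, byte[] body) {
        store.addLast(new StoredMessage(topic, sequence, System.currentTimeMillis(), body));
        evictExpired();
    }

    // Retransmission request: return all retained messages for a topic
    // starting at a given sequence number.
    public synchronized List<StoredMessage> replay(String topic, long fromSequence) {
        evictExpired();
        List<StoredMessage> result = new ArrayList<>();
        for (StoredMessage m : store) {
            if (m.topic().equals(topic) && m.sequence() >= fromSequence) {
                result.add(m);
            }
        }
        return result;
    }

    private void evictExpired() {
        long cutoff = System.currentTimeMillis() - retentionMillis;
        while (!store.isEmpty() && store.peekFirst().storedAtMillis() < cutoff) {
            store.removeFirst();
        }
    }

    public static void main(String[] args) {
        CachingEngineSketch ce = new CachingEngineSketch(60_000);
        ce.persist("NYSE.RTF.IBM", 1, "quote-1".getBytes());
        ce.persist("NYSE.RTF.IBM", 2, "quote-2".getBytes());
        System.out.println("replayable: " + ce.replay("NYSE.RTF.IBM", 1).size());
    }
}
```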
To illustrate how the routing maps may implement routing, several examples of publish/subscribe routing paths are shown in Fig. 1. In this illustration, the middleware architecture of the publish/subscribe network provides five or more communication paths between publishers and subscribers.
The first communication path links an external data source to an external data destination. Published messages received from the external data sources 114_1-n are translated into the native (for example, Tervela™) messaging protocol and then routed by edge MA 106a. One route the native-protocol messages can take from edge MA 106a leads to external data destination 116n; this path is referred to as communication path 1a. In this case, the native-protocol messages are converted into external-protocol messages suited to that external data destination. Another route from edge MA 106a passes internally through core MA 108b; this path is referred to as communication path 1b. Along this path, core MA 108b routes the native messages back to edge MA 106a, which converts them into the external message protocol suited to external data destination 116_1 before routing them to it. As can be seen, this communication path does not require an API to route messages from publisher to subscriber; thus, if the publish/subscribe system is used for communication from external sources to external destinations, the system need not include an API.
Another communication path, referred to as communication path 2, links an external data source 114n to an application via API 110b. Published messages received from the external data source are translated into the native message protocol at edge MA 106a and are then routed by that edge MA to core MA 108a. From the first core MA 108a, the messages are routed through another core MA 108c to API 110b, and from that API they are delivered to the subscribing application (for example, 112_2). Because this communication path is bidirectional, in another example messages can travel along the reverse path from the subscribing applications 112_1-n to the external data destination 116n. In each example, a core MA receives and routes native-protocol messages, whereas an edge MA receives external- or native-protocol messages and routes native- or external-protocol messages, respectively (the edge MA translating the external message protocol into the native message protocol, and the native message protocol into the external message protocol). Each edge MA can route an inbound message to native-protocol and external-protocol channels simultaneously, regardless of whether the inbound message arrived as a native-protocol or an external-protocol message. As a result, each edge MA can route inbound messages to external and internal consumers simultaneously, with internal consumers consuming native-protocol messages and external consumers consuming external-protocol messages. This capability allows the messaging infrastructure to integrate seamlessly and smoothly with legacy applications and systems.
Another communication path, referred to as communication path 3, links two applications, both of which use the APIs 110a-b. At least one of these applications publishes or subscribes to messages. The delivery of published messages to subscribing applications, and the sending of messages published by publishing applications, are accomplished through APIs located at the edge of the publish/subscribe network. When an application subscribes to messages, one of the core or edge MAs routes the messages to its API, and the API then notifies the subscribing application when data are ready to be delivered to it. Messages published by an application are sent via its API to the core MA 108c with which that API is "registered."
Note that an API becomes logically connected to an MA by "registering" (logging in) with it. The API initiates the connection to the MA by sending a registration ("login" request) message to that MA. After registration, the API can subscribe to specific topics of interest by sending its subscription messages to that MA. Topics are used for publish/subscribe message transmission and define the shared interest domains and targets of messages; subscribing to one or more topics therefore allows the receipt and transmission of messages bearing those topic notations. The P&M system periodically sends entitlement updates to the MAs in the network, and each MA updates its own tables accordingly. Thus, if the API is found to be entitled to subscribe to a specific topic (the MA verifies the API's entitlement against its routing entitlement table), the MA activates the logical connection to that API. Then, if the API is properly registered with core MA 108c, core MA 108c routes the data to the second API 110b, as shown in the figure. In other examples, core MA 108b may route messages through one or more additional core MAs (not shown), which route the messages to API 110b, and API 110b then delivers the messages to the subscribing applications 112_1-n.
As can be seen, communication path 3 does not require an edge MA, because it does not involve any external data messaging protocol. In one embodiment exemplified by this communication path, an enterprise system is configured with a news server that publishes the latest news on a variety of topics to employees. To receive news, employees subscribe to the topics of interest to them via a news browser application that uses the API.
Note that the middleware architecture allows subscription to one or more topics. In addition, by allowing wildcards in the topic notation, the architecture makes it possible to subscribe to a group of related topics with a single subscription request.
Another communication path, referred to as communication path 4, is one of several paths associated with the P&M systems 102 and 104; each of these paths links the P&M system to one of the MAs in the publish/subscribe network middleware architecture. The messages exchanged between the P&M system and each MA are management messages used to configure and monitor that MA. In one system configuration, the P&M system communicates with the MAs directly. In another system configuration, the P&M system communicates with some MAs through other MAs. In yet another configuration, the P&M system can communicate with the MAs both directly and indirectly.
In a typical implementation, the middleware architecture can be deployed over a network of switches, routers, and other networking devices, and it employs channel-based messaging that can communicate over any type of physical medium. One exemplary implementation of this fabric-agnostic, channel-based messaging is over an IP-based network. In such an environment, all communication between the publish/subscribe physical components is performed over UDP (User Datagram Protocol), with transport reliability provided by the message transport layer. Fig. 1a shows an overlay network according to the present principles.
As shown in the figure, overlay communications 1, 2 and 3 can take place between the three core MAs 208a-c via switches 214a-c, router 216, and subnets 218a-c. In other words, these communication paths can be established on top of an underlying network that comprises networking infrastructure such as subnets, switches, and routers; and, as noted above, the architecture can span large geographic areas (different countries, or even continents).
Clearly, the foregoing and other end-to-end middleware architectures in accordance with the principles of the present invention can be implemented in a variety of enterprise infrastructures and business environments. Fig. 2 shows one such implementation.
In this enterprise infrastructure, a market data distribution plant 12 is built on top of the publish/subscribe network, which routes stock quotes from the market data exchange systems 320_1-n to traders (applications, not shown). This overlay solution relies on the underlying network to provide the interconnections between the MAs and between the MAs and the P&M system. Market data delivery to the APIs 310_1-n is based on application subscriptions. With this infrastructure, trade orders placed by the traders using the applications (not shown) are routed from the APIs 310_1-n back to the market data exchange systems 320_1-n through the publish/subscribe network (via core MAs 308a-b and edge MA 306b).
An example of the underlying physical deployment is shown in Fig. 2a. As shown, the MAs are directly connected to one another and plugged directly into the networks and subnets to which the messaging-traffic clients and publishers are physically connected. In this case, the interconnections should be direct connections, that is, direct connections between the MAs and between them and the P&M system. This makes it possible to realize an alternate network backbone with no intermediary devices, physically separating the messaging traffic from other enterprise application traffic. In effect, the MAs can be used to remove the dependence on the traditional routed network for carrying messaging traffic.
In this example of a physical deployment, external data sources or destinations, such as the market data exchange systems, are directly connected to an edge MA, for example edge MA1. Applications that consume or publish messaging traffic, such as market data applications, are directly connected to subnets 1-12. These applications have at least two links for subscribing, publishing, or communicating with other applications; they can use either the enterprise backbone or the messaging backbone. The enterprise backbone comprises multiple layers of redundant routers and switches that carry all enterprise application traffic, including but not limited to messaging traffic; the messaging backbone comprises the edge and core MAs, directly interconnected with one another via integrated switches.
Using an alternate backbone network has the advantage that messaging traffic is isolated from other enterprise application traffic, giving better control over the performance of the messaging traffic. In one implementation, an application logically or physically connected to core MA3 on subnet 6 uses the Tervela API to subscribe to or publish native-protocol message traffic. In another implementation, an application logically or physically connected to edge MA1 on subnet 7 subscribes to or publishes external-protocol messaging traffic, and that MA performs the protocol conversion using its integrated protocol translation engine module. Logically, the physical components of the publish/subscribe network are built on a message transport layer analogous to layers 1 through 4 of the Open Systems Interconnection (OSI) reference model, which are, respectively, the physical, data link, network, and transport layers.
Accordingly, in one embodiment of the invention, the publish/subscribe network can be deployed directly in the underlying network/fabric, for example by inserting one or more messaging line cards into all of the network switches and routers or into a subset of them. In another embodiment of the invention, the publish/subscribe network can effectively be deployed as a mesh overlay network (in which all physical components are connected to one another). For example, a full mesh network of four MAs is one in which each MA is connected to each of its three peer MAs. In a typical implementation, the publish/subscribe network is a mesh network of the following components: one or more external data sources and/or destinations, one or more provisioning and management (P&M) systems, one or more messaging appliances (MA), one or more optional caching engines (CE), and one or more optional application programming interfaces (API).
As noted above, communication within the boundaries of each publish/subscribe middleware system uses the message-level native protocol, independently of the underlying transport logic. This is why the architecture is referred to as a transport-transparent, channel-based messaging architecture.
Fig. 3 illustrates the channel-based messaging architecture 320 in greater detail. In general, every communication path between a messaging source and destination is defined as a message transport channel. Each channel 326_1-n is established over a physical medium using an interface 328_1-n between the channel source and the channel destination. Each such channel is established for a particular messaging protocol, for example the native (Tervela™) messaging protocol or another protocol. Only the edge MAs (those MAs that manage the ingress and egress of the publish/subscribe network) use external channel message protocols. Based on the channel message protocol, the channel management layer 324 determines whether inbound and outbound messages require protocol translation. At each edge MA, if the channel message protocol of an inbound message differs from the native protocol, the channel management layer 324 sends the messages through the protocol translation engine (PTE) 332 before delivering them for processing to the native message layer 330. Likewise, at each edge MA, if the native message protocol of an outbound message differs from the channel message protocol (the external message protocol), the channel management layer 324 sends the messages through the protocol translation engine (PTE) 332 before routing them to the transmit channels 326_1-n. The channel thus manages the interface 328_1-n to the physical medium, the particular network and transport logic associated with that physical medium, and message segments or fragments.
In other words, the channels manage the OSI transport layers 322. Optimization of channel resources is performed on a per-channel basis (for example, optimizing message density on the physical medium based on consumption patterns, including bandwidth, message size distribution, channel destination resources, and channel health statistics). Furthermore, because the communication channels are fabric-agnostic, no particular type of fabric is required; in practice, any fabric medium will work, for example ATM, Infiniband, or Ethernet.
Incidentally, message fragmentation or reassembly may be needed, for example when a single message is split across multiple frames or when multiple messages are packed into a single frame. Fragmentation or packing is performed before the message is delivered to the channel management layer.
Fig. 3 further shows several possible channel implementations in a network with this middleware architecture. In one implementation 340, communication takes place over a network-based channel using UDP multicast over an Ethernet-switched network, with the Ethernet-switched network serving as the physical medium for the communication. In this implementation, the source sends messages from its IP address, via its UDP port, to a destination group IP address and its associated UDP port (hence multicast). In a variant 342 of this implementation, communication between source and destination is accomplished using UDP unicast over the Ethernet-switched network: the source sends messages from its IP address, via its UDP port, to a selected destination IP address and UDP port.
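For the Ethernet/UDP channel implementation 340 just described, a minimal sender sketch follows; the multicast group address and port are arbitrary example values rather than values taken from the patent.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Illustrative UDP-multicast channel sender: the source sends a frame from
// its own IP address/UDP port to a destination group address and port, as in
// implementation 340 above.
public class MulticastChannelSketch {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.1.2.3"); // example multicast group
        int port = 45000;                                       // example destination UDP port
        byte[] frame = "one or more packed messages".getBytes();

        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(frame, frame.length, group, port));
        }
    }
}
```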
In another implementation 344, the channel is established over an Infiniband interconnect using the native Infiniband transport protocol, with the Infiniband fabric as the physical medium. In this implementation the channel is node-based, and communication between source and destination uses their respective node addresses. In yet another implementation 346, the channel is memory-based, for example RDMA (remote direct memory access), referred to here as direct connect (DC). With such a channel, messages are sent directly from the source machine into the memory of the destination machine, bypassing the CPU processing that would otherwise move messages from the NIC into application memory space, and potentially avoiding the network overhead of packetizing the messages.
As for the native protocol, one approach uses the aforementioned native Tervela™ messaging protocol. Conceptually, the Tervela™ messaging protocol is similar to an IP-based protocol. Each message comprises a message header and a message payload. The message header contains a number of fields, one of which carries the topic information, which indicates the shared-interest domain that clients subscribe to.
Fig. 4 shows one possible topic-based message format. As shown, a message comprises a header 370 and a body 372 or 374 containing the payload. Two classes of message are shown, namely data and management messages, which have different message bodies and payload types. The header contains fields for the source and destination namespace identifiers, the source and destination session identifiers, the topic sequence number, and a timestamp; in addition, it contains the topic notation field (preferably of variable length). A topic can be defined as a token-based string, for example NYSE.RTF.IBM 376, which is the topic string for messages carrying real-time IBM stock quotes.
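The sketch below illustrates the header-plus-payload layout described above; the field types and widths are assumptions made for illustration, since the patent does not fix them.

```java
// Illustrative layout of a topic-addressed message: a header carrying
// namespace/session identifiers, a topic sequence number, a timestamp, and a
// variable-length topic notation, followed by the raw payload.
public class TopicMessageSketch {
    record Header(int sourceNamespaceId,
                  int destinationNamespaceId,
                  long sourceSessionId,
                  long destinationSessionId,
                  long topicSequenceNumber,
                  long timestampNanos,
                  String topicNotation) {}   // variable-length, e.g. "NYSE.RTF.IBM"

    record Message(Header header, byte[] payload) {}

    public static void main(String[] args) {
        Header h = new Header(1, 2, 100L, 200L, 42L, System.nanoTime(), "NYSE.RTF.IBM");
        Message quote = new Message(h, new byte[] {/* raw, schema-less payload bytes */});
        System.out.println(quote.header().topicNotation());
    }
}
```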
In one implementation, the topic information in a message may be encoded as, or mapped to, a key, which can be one or more integer values. Each topic can then be mapped to a unique key, with the mapping database between topics and keys maintained by the P&M system and pushed over the wire to all MAs. As a result, when an API subscribes to or publishes on a topic, the MA can return the associated unique key to be used in the topic field of the messages.
Preferably, the subscription format follows the same format as the message topic. However, the subscription format also supports wildcards that match any topic substring, as well as pattern matching of topic substrings against regular expressions. The mapping of wildcards to actual topics may be handled by the P&M system or by the MA, depending on the complexity of the wildcard or pattern-matching request.
For example, pattern matching may follow rules such as the following:
Example #1: the wildcarded string T1.*.T3.T4 matches T1.T2a.T3.T4 and T1.T2b.T3.T4, but does not match T1.T2.T3.T4.T5.
Example #2: the wildcarded string T1.*.T3.T4.* does not match T1.T2a.T3.T4 or T1.T2b.T3.T4, but matches T1.T2.T3.T4.T5.
Example #3: the wildcarded string T1.*.T3.T4.[*] (the fifth element is optional) matches T1.T2a.T3.T4, T1.T2b.T3.T4, and T1.T2.T3.T4.T5, but does not match T1.T2.T3.T4.T5.T6.
Example #4: the wildcarded string T1.T2*.T3.T4 matches T1.T2a.T3.T4 and T1.T2b.T3.T4, but does not match T1.T5a.T3.T4.
Example #5: the wildcarded string T1.*.T3.T4.> (any number of trailing elements) matches T1.T2a.T3.T4, T1.T2b.T3.T4, T1.T2.T3.T4.T5, and T1.T2.T3.T4.T5.T6.
Fig. 5 illustrates topic-based message routing, in which a topic is generally defined as a token-based string, for example T1.T2.T3.T4, where T1, T2, T3, and T4 are strings of variable length. As shown, an inbound message labeled with a particular topic notation 400 is selectively routed to communication channels 404, with the routing decision made on the basis of the routing table 402. The mapping of topic subscriptions to channels defines the routes, and it is used to propagate messages across the entire publish/subscribe network. The superset of all these subscription-to-channel mappings defines the routing table, which is also known as the subscription table. The subscription table used for routing on string-based topics can be organized in many ways, but a preferred configuration optimizes it for size and routing lookup speed. In one implementation, the subscription table can be defined as a dynamic hash-map structure, and in another implementation it can be arranged as a tree structure, as shown in the diagram of Fig. 5.
The tree consists of nodes (for example, T1, ..., T10) connected by edges, where each substring of a topic subscription corresponds to a node in the tree. The channels mapped to a given subscription are stored on the subscription's leaf node, each leaf node indicating the list of channels from which that topic subscription was received (that is, over which subscription requests arrived). This list indicates which channels should receive a copy of the messages whose topic notation matches that subscription. As shown, message routing takes the inbound message topic as the lookup input and then parses the tree using each substring of that topic, locating the different channels associated with the inbound message topic. For example, T1.T2.T3.T4.T5 is directed to channels 1, 2, and 3; T1.T2.T3 is directed to channel 4; T1.T6.T7.T*.T9 is directed to channels 4 and 5; T1.T6.T7.T8.T9 is directed to channel 1; and T1.T6.T7.T*.T10 is directed to channel 5.
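A sketch of such a tree-structured subscription table follows, using some of the figure's topics and channels as test data; the class and method names are illustrative only, not the patent's implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of a tree-structured subscription (routing) table as in Fig. 5:
// each subscription substring is a node, leaf nodes carry the list of
// channels over which that subscription was received, and routing a message
// topic walks the tree element by element.
public class SubscriptionTableSketch {
    private static final class Node {
        final Map<String, Node> children = new HashMap<>();
        final List<Integer> channels = new ArrayList<>();   // non-empty only on leaves
    }

    private final Node root = new Node();

    // Add a route: a subscription topic (which may contain "*" elements)
    // learned over the given channel.
    public void subscribe(String subscription, int channelId) {
        Node node = root;
        for (String element : subscription.split("\\.")) {
            node = node.children.computeIfAbsent(element, e -> new Node());
        }
        node.channels.add(channelId);
    }

    // Route a concrete message topic: collect the channels of every
    // subscription that matches it.
    public Set<Integer> route(String topic) {
        Set<Integer> out = new HashSet<>();
        walk(root, topic.split("\\."), 0, out);
        return out;
    }

    private void walk(Node node, String[] elements, int i, Set<Integer> out) {
        if (i == elements.length) {
            out.addAll(node.channels);
            return;
        }
        Node exact = node.children.get(elements[i]);
        if (exact != null) walk(exact, elements, i + 1, out);
        Node star = node.children.get("*");
        if (star != null) walk(star, elements, i + 1, out);
    }

    public static void main(String[] args) {
        SubscriptionTableSketch table = new SubscriptionTableSketch();
        table.subscribe("T1.T2.T3.T4.T5", 1);
        table.subscribe("T1.T2.T3", 4);
        table.subscribe("T1.T6.T7.*.T9", 5);
        System.out.println(table.route("T1.T6.T7.T8.T9"));   // [5]
    }
}
```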
Although the structure of the routing table is chosen to optimize routing table lookups, lookup performance also depends on the search algorithm used to find the topic subscription or subscriptions that match an inbound message topic. The routing table structure should therefore be adapted to the algorithm, and vice versa. One way to reduce the size of the routing table is to have the routing algorithm propagate subscriptions selectively across the publish/subscribe network. For example, if a subscription appears to be a subset of another subscription that has already been propagated (for example, a portion of the whole string), there is no need to propagate that subset subscription, because the MA already has the information for the superset subscription.
Based on the foregoing, the preferred message routing protocol is a topic-based routing protocol in which entitlements indicate the mapping between subscribers and their corresponding topics. Entitlements are specified per subscriber and indicate which messages the subscriber is entitled to consume, or which messages the publisher can produce (publish). These entitlements are defined in the P&M machine, communicated to all MAs in the publish/subscribe network, and then used by the MAs to create and update their routing tables.
Each MA updates its routing table by keeping track of who is subscribing (requesting subscriptions) to which messages. Before adding a route to its routing table, however, the MA must check the subscription against the entitlements for the publish/subscribe network. The MA verifies that the subscribing entity, which may be a neighbor MA, the P&M system, a CE, or an API, is authorized to do so. If the subscription is valid, the route is created and added to the routing table. Furthermore, because some entitlements may be known in advance, the system can be deployed with predefined entitlements that are loaded in the field at boot time. For example, certain management messages, such as configuration updates, may always be forwarded across the network and are therefore loaded automatically at startup.
Given the foregoing description of a messaging system with a publish/subscribe middleware architecture, it can be appreciated that the intelligent messaging application programming interface (referred to here simply as the API) plays an important role in this system when handling messaging for applications. Applications rely on the API for all messaging, including registration, publishing, and subscribing. Registration involves sending a management registration request to one or more MAs, which verify the entitlements of the API and the application in order to register them. Once their registration is confirmed, applications can subscribe to, and publish on, any topic they are entitled to; a hypothetical sketch of this call sequence is given below. Accordingly, we turn now to a description of the details of an API configured according to the principles of the present invention. Fig. 6 is a block diagram illustrating the API.
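As a point of orientation, here is a minimal, hypothetical sketch of the application-facing call sequence just described (register, subscribe, publish). The interface and class names are invented for illustration; the patent does not define a concrete programming interface, and the in-memory stand-in below exists only so the sequence can run.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical application-facing surface of the messaging API: the
// application registers (a management request verified by the MA), then
// subscribes to and publishes on entitled topics.
public class ApiUsageSketch {

    interface MessagingApi {
        void register(String applicationName);                       // login/registration request to an MA
        void subscribe(String topic, Consumer<byte[]> onMessage);    // topic may include wildcards
        void publish(String topic, byte[] payload);
    }

    // Trivial in-process stand-in so the call sequence can actually run;
    // a real engine would route via the stub, the IPC bus, and the MA.
    static final class InMemoryApi implements MessagingApi {
        private final Map<String, List<Consumer<byte[]>>> subscribers = new HashMap<>();
        public void register(String applicationName) { /* entitlement check omitted */ }
        public void subscribe(String topic, Consumer<byte[]> onMessage) {
            subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(onMessage);
        }
        public void publish(String topic, byte[] payload) {
            subscribers.getOrDefault(topic, List.of()).forEach(c -> c.accept(payload));
        }
    }

    public static void main(String[] args) {
        MessagingApi api = new InMemoryApi();
        api.register("news-browser");
        api.subscribe("NYSE.RTF.IBM", payload -> System.out.println(new String(payload)));
        api.publish("NYSE.RTF.IBM", "IBM 123.45".getBytes());
    }
}
```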
In the diagram of Fig. 6, the API is the combination of the API communication engine 602 and the API stub 604. The communication engine 602 is what is commonly called a daemon, a program that runs under the operating system to handle the periodic service requests that a computer system expects to receive; in some cases, however, it is embedded within the application itself, and the bus is then an in-process communication bus. The daemon program forwards requests to the other appropriate programs (or processes). In this case, the API communication engine acts as the gateway between the application and the publish/subscribe middleware system. As mentioned, the API communication engine manages the communication between the application and the MA by dynamically selecting the transport protocol and by dynamically adjusting the number of messages to be packed into a single frame. The number of messages packed into a single frame depends on a number of factors, for example the message rate and the system resource utilization on the MA and API hosts.
Applications communicate with the API communication engine using the API stub 604. Typically, an application program that uses remote procedure calls (RPC) is compiled with a stub that stands in for the requested remote procedure. The stub accepts the RPC and forwards it to the remote process; when the remote process completes, it returns the result to the stub, which passes it to the program that made the RPC. In some cases, communication between the API stub and the API communication engine takes place over an interprocess communication bus implemented using mechanisms such as sockets or shared memory. The API stub is available in various programming languages, including C, C++, Java, and .NET. The API itself is available as complete implementations in multiple languages and can run on different operating systems, including Microsoft Windows™, Linux™, and Solaris™.
The API communication engine is built on logical layers such as the message transport layer 610. Unlike the MAs, which interact directly with the physical medium interfaces, the API is mostly implemented on top of the operating system, and its message transport layer communicates via the OS. To support the different channel types, the OS may need a specific driver for each physical medium that it does not support by default. The OS may also require the user to install a specific physical-medium card. For example, a physical medium such as direct connect (DC) or Infiniband requires the OS driver associated with its specialized interface card in order to allow the message transport layer to send messages over that channel.
The transport layer 612 in the API is also somewhat similar to the transport layer in the MA. The main difference, however, is that inbound messages travel along different paths in the API and in the MA. In the API, data messages are passed to the application delivery routing engine 614 (which imposes fewer schema constraints), and administrative messages are passed to the administrative message layer 616. Except that it maps applications (606) to subscriptions rather than mapping channels, the application delivery routing engine behaves much like the message routing engine 618. Thus, when an inbound message arrives, the application delivery routing engine looks up all subscribing applications and then sends each of them either a copy of the message or a reference to it.
In some implementations, the application delivery routing engine is responsible for the late schema binding feature. As mentioned above, the native (e.g., Tervela™) messaging protocol carries information in a raw, compressed format that does not include the structure and definitions of the underlying data. As a result, the messaging system advantageously reduces its bandwidth usage and allows increased message capacity and throughput. When the API receives a data message, it binds the raw data to its schema, thereby allowing the application to access the information transparently. The schema defines the content structure of a message by providing a mapping between field names, field types and the offset positions of the fields in the message body. Thus, an application can request a particular field by name without having to know the field's position in the message, and the API uses the offset to locate that information and return it to the application. In one implementation, the schema is provided by the MA when the application requests to subscribe or publish via the MA.
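The late schema binding described above can be pictured with the following minimal Java sketch, in which the schema is assumed to be a simple mapping from field name to field type and byte offset supplied at subscription time; the field names, types and offsets used here are purely illustrative.

```java
// Minimal sketch of late schema binding: read a named field from a raw payload
// using only the schema's (type, offset) mapping.
import java.nio.ByteBuffer;
import java.util.LinkedHashMap;
import java.util.Map;

public class LateSchemaBinding {
    enum FieldType { INT32, DOUBLE }
    record Field(FieldType type, int offset) {}

    static Object read(ByteBuffer raw, Map<String, Field> schema, String name) {
        Field f = schema.get(name);
        return switch (f.type()) {
            case INT32  -> raw.getInt(f.offset());     // absolute reads: position-independent
            case DOUBLE -> raw.getDouble(f.offset());
        };
    }

    public static void main(String[] args) {
        // Schema that would be delivered by the MA when the application subscribes.
        Map<String, Field> schema = new LinkedHashMap<>();
        schema.put("last_price", new Field(FieldType.DOUBLE, 0));
        schema.put("volume",     new Field(FieldType.INT32, 8));

        // Raw, schema-less payload as it travels through the middleware.
        ByteBuffer raw = ByteBuffer.allocate(12);
        raw.putDouble(84.12).putInt(500);   // relative puts land at offsets 0 and 8

        System.out.println("last_price = " + read(raw, schema, "last_price"));
        System.out.println("volume     = " + read(raw, schema, "volume"));
    }
}
```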
To a large extent, outbound messages follow the same output logic as in the MA. Indeed, the API can have a protocol optimization service (POS) 620 just as the MA does. However, the publish/subscribe middleware system is configured with the POS distributed between the MA and the API communication engines according to a master/slave arrangement. Unlike the POS in the MA, which decides on its own when to change a channel configuration, the POS in the API acts as a slave to the master POS in the MA to which it is linked. The master POS and all slave POSes monitor the consumption profile of system and network resources over time. All, a subset, or an aggregation of these resource consumption profiles is communicated from the slave POSes to the master POS, and based on these profiles the master POS determines how messages are delivered to the API communication engines, including selecting the transport protocol used to deliver messages to them. For example, a transport protocol selected from among unicast, multicast or broadcast transmission protocols is not always suited to the environment. Therefore, when the POS on the MA decides to change a channel configuration, it remotely controls the slave POS at the API.
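As a purely illustrative sketch of the kind of decision the master POS might take, the following Java fragment chooses among unicast, multicast and broadcast from a handful of assumed inputs (subscriber count on a network segment, host count, sender CPU load); the thresholds are invented for the example and are not taken from the disclosure.

```java
// Illustrative heuristic only: not the actual POS decision logic.
public class TransportSelection {
    enum Transport { UNICAST, MULTICAST, BROADCAST }

    static Transport choose(int subscribers, int hostsOnSegment, double senderCpuLoad) {
        if (subscribers >= 0.8 * hostsOnSegment) return Transport.BROADCAST;   // nearly every host subscribes
        if (subscribers > 2 || senderCpuLoad > 0.75) return Transport.MULTICAST; // share fan-out in the network
        return Transport.UNICAST;                                               // small fan-out, lightly loaded sender
    }

    public static void main(String[] args) {
        System.out.println(choose(2, 40, 0.30));   // UNICAST
        System.out.println(choose(12, 40, 0.85));  // MULTICAST
        System.out.println(choose(38, 40, 0.50));  // BROADCAST
    }
}
```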
While performing its message-transfer role in the publish/subscribe middleware system, the API is preferably transparent to the application, in that it minimizes the system resources used to handle application requests. In one configuration, the API optimizes the number of memory copies by performing zero-copy message reception (i.e., eliminating copies, into the application memory space, of messages received from the network). For example, the API communication engine exposes buffers (memory space) to the network interface card so that inbound messages are written directly into the API communication engine's memory space. These messages then become accessible to the application via shared memory. Similarly, the API performs zero-copy message transmission directly from the application's memory space to the network.
In another configuration, the API reduces the CPU processing required to carry out message reception and transmission tasks. For example, the API communication engine performs message reception and transmission tasks in batches, rather than receiving or sending one message at a time, thereby reducing the number of CPU processing cycles. Such batched message transmission typically involves message queuing. Therefore, to keep the end-to-end latency to a minimum, batched message transmission needs to limit the time messages remain queued to less than an acceptable latency threshold.
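The following sketch, with invented batch sizes and timeouts, illustrates how batched transmission can be bounded by a queueing-time budget so that batching does not push the end-to-end latency past the acceptable threshold.

```java
// Illustrative batcher: flush when the batch is full or the oldest queued
// message has waited close to the latency budget. A real engine would also
// use a timer thread to flush an otherwise idle queue.
import java.util.ArrayList;
import java.util.List;

public class LatencyBoundedBatcher {
    private final int maxBatch;
    private final long maxQueueNanos;
    private final List<byte[]> pending = new ArrayList<>();
    private long oldestEnqueueNanos;

    LatencyBoundedBatcher(int maxBatch, long maxQueueNanos) {
        this.maxBatch = maxBatch;
        this.maxQueueNanos = maxQueueNanos;
    }

    void submit(byte[] message) {
        if (pending.isEmpty()) oldestEnqueueNanos = System.nanoTime();
        pending.add(message);
        if (pending.size() >= maxBatch || waitedTooLong()) flush();
    }

    boolean waitedTooLong() {
        return System.nanoTime() - oldestEnqueueNanos >= maxQueueNanos;
    }

    void flush() {
        if (pending.isEmpty()) return;
        // One send call (one frame, one interrupt) for the whole batch.
        System.out.println("sending frame with " + pending.size() + " messages");
        pending.clear();
    }

    public static void main(String[] args) {
        LatencyBoundedBatcher b = new LatencyBoundedBatcher(32, 200_000); // 200 microseconds
        for (int i = 0; i < 100; i++) b.submit(("msg-" + i).getBytes());
        b.flush(); // drain whatever is left below the batch size
    }
}
```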
To maintain the above transparency, the API handles the messages published or subscribed to by the application. To reduce system bandwidth usage and thereby increase system throughput, information is carried in a raw, compressed format. Hence, when the API receives a data message, it binds the raw data to its schema, allowing the application to access the information transparently. The schema defines the content structure of a message by providing a mapping between field names, field types and the offset positions of the fields in the message body. As a result, an application can request a particular field by name without having to know its position in the message, and the API uses the offset to locate that information and return it to the application. Incidentally, to use bandwidth more efficiently, an application can subscribe to a topic requesting to receive only the updated information from the message stream. With such a subscription, the MA compares new messages against previously sent messages and distributes only the updates to the application.
Another implementation provides the ability to present data received or published between the subscribing application and the API in a pre-agreed format. This content conversion is performed by a rendering engine and is based on a data presentation format supplied by the application. The data presentation format can be defined as a mapping between the underlying data schema and the application data format. For example, an application may publish and consume data in XML, and the API converts back and forth between that XML format and the underlying message format.
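A toy sketch of one direction of such a conversion is shown below: a flat field map is rendered into an agreed XML form for the application. The element names and the flat structure are assumptions made for illustration; a real presentation format mapping would be negotiated between the application and the API.

```java
// Toy rendering step: field map -> agreed XML form for the subscribing application.
import java.util.LinkedHashMap;
import java.util.Map;

public class RenderingSketch {
    static String toXml(Map<String, String> fields) {
        StringBuilder sb = new StringBuilder("<message>");
        fields.forEach((k, v) -> sb.append('<').append(k).append('>')
                                   .append(v).append("</").append(k).append('>'));
        return sb.append("</message>").toString();
    }

    public static void main(String[] args) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("symbol", "IBM");
        fields.put("last_price", "84.12");
        System.out.println(toXml(fields)); // what the subscribing application sees
    }
}
```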
The API is also designed for real-time channel optimization. Specifically, communications between the MA and the API communication engine take place over one or more channels, each channel carrying the messages corresponding to one or more subscriptions or publications. The MA and the API communication engine monitor every path in the communication paths and continuously, dynamically optimize the available resources. This is done by minimizing the processing overhead associated with data publishing and subscribing and by reserving the system resources that publishing and subscribing applications are expected to need.
In one implementation, the API communication engine enables a real-time channel message flow control feature used to prevent one or more applications from exhausting the available system resources. This message flow control feature is governed by the quality of service (QoS) of the subscription. For example, with a last-value-given or best-effort type of QoS, it is usually more important to process less but higher-quality data than to process more data. For instance, if data quality is measured by its timeliness, it may be better to process only the most recent information. Moreover, the API communication engine notifies the MA of the current state of the channel queue rather than waiting for the queue to overflow, which would leave the application with the burden of processing stale data while losing the most recent data.
Fig. 7 illustrates the effect of the real-time message flow control (MFC) algorithm. Under this algorithm, the size of the channel queue acts as a threshold parameter. For example, messages sent over a particular channel accumulate in its channel queue on the receiving side, and as the channel queue grows its size may reach a high threshold; once its size exceeds this high threshold, the channel may no longer be able to keep up with the inbound message flow. When this situation is approached (i.e., the channel is in danger of reaching its maximum capacity), the receiving messaging device can activate MFC before the channel queue overflows. When the queue shrinks and its size drops below a low threshold, MFC is turned off. The difference between the high and low thresholds is set large enough to produce so-called hysteresis behavior, in which MFC is turned on at a higher queue-size value than the queue-size value at which it is turned off. This threshold gap avoids the on/off oscillation of message flow control that would otherwise occur when the queue size hovers around the high threshold. Thus, to prevent queue overflow on the receiving side of the message transfer, real-time, dynamic MFC keeps the inbound message rate under control and below the maximum channel capacity.
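The hysteresis behaviour of the MFC algorithm can be sketched as follows; the high and low watermark values are illustrative assumptions, and the class models only the on/off decision, not the actual throttling of the sender.

```java
// Hysteresis-based message flow control: ON at the high watermark, OFF only
// once the queue has drained below the (lower) low watermark.
import java.util.ArrayDeque;
import java.util.Deque;

public class HysteresisFlowControl {
    private final int highWater, lowWater;
    private final Deque<String> channelQueue = new ArrayDeque<>();
    private boolean flowControlActive = false;

    HysteresisFlowControl(int highWater, int lowWater) {
        this.highWater = highWater;
        this.lowWater = lowWater;
    }

    /** Returns false when the sender should throttle this channel. */
    boolean offer(String message) {
        channelQueue.add(message);
        if (!flowControlActive && channelQueue.size() >= highWater) {
            flowControlActive = true;            // queue is in danger of overflowing
            System.out.println("MFC ON  at depth " + channelQueue.size());
        }
        return !flowControlActive;
    }

    void drain(int n) {
        for (int i = 0; i < n && !channelQueue.isEmpty(); i++) channelQueue.poll();
        if (flowControlActive && channelQueue.size() <= lowWater) {
            flowControlActive = false;           // enough headroom to resume full rate
            System.out.println("MFC OFF at depth " + channelQueue.size());
        }
    }

    public static void main(String[] args) {
        HysteresisFlowControl mfc = new HysteresisFlowControl(100, 60);
        for (int i = 0; i < 100; i++) mfc.offer("m" + i); // crosses the high watermark
        mfc.drain(30);  // depth 70, still above the low watermark: MFC stays on (no oscillation)
        mfc.drain(20);  // depth 50, at or below the low watermark: MFC turns off
    }
}
```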
As an alternative to the hysteresis-based MFC algorithm, which drops messages when the channel queue nears its capacity, real-time dynamic MFC can operate to conflate data, applying some merge algorithm to the subscription queue. However, because this operation may require additional message transformation, the MA may fall back onto a slower path rather than remaining on the fast forwarding path. This prevents the message transformation from having a negative impact on message forwarding throughput. The additional message transformation is performed by a processor similar to the protocol translation engine. Examples of such processors include an NPU (network processing unit), a semantic processor, a separate micro-engine on the MA, and the like.
For greater efficiency, real-time conflation or subscription-level message processing can be distributed between the sender and the receiver. For example, where subscription-level message processing is requested by only one subscriber, it makes sense to push it down to the receiving side rather than performing it on the sending side. However, if more than one consumer requests the same subscription-level message processing on the data, it is more reasonable to perform it upstream on the sending side. The purpose of sharing the workload between the sending and receiving sides of a channel is to make optimal use of the available combined processing resources.
When a channel packs multiple messages into a single frame, it can relieve pressure on the receiver side by freeing up some processing resources while keeping the message waiting time below the maximum acceptable latency. Receiving fewer large frames is sometimes more efficient than processing many small frames. This is especially true for an API running on a typical OS with commodity computer hardware components such as a CPU, memory and a NIC. A typical NIC is designed to generate an OS interrupt for every frame it receives, which reduces the application-level processing time available for the API to deliver messages to the subscribing applications.
As further illustrated in Fig. 7, if the current level of the channel queue has crossed the maximum threshold, the MA throttles the message rate on that particular channel in order to reduce the load on the API communication engine and allow the application to return to a steady state. During this throttling, depending on the subscription's quality of service, the most recent messages are given precedence over older messages. Once the queue returns to a normal load level, the API can notify the MA to disable channel message flow control.
In a variation of the above implementation, the message flow control feature is implemented on the API side of the message delivery routing path (to/from the application). Whenever a message needs to be delivered to a subscribing application, the API communication engine can decide, where the subscription's quality of service allows it, to drop messages in a way that favors subsequent update messages.
In short, whether in the API or in the MA, message flow control can use different throttling strategies, in which the API communication engine, or the MA connected to that API communication engine, can perform subscription-based data merging (also called data conflation) rather than favoring new messages and dropping old ones. In other words, the dropped data is not lost entirely but is blended with the most recent data. In one embodiment, this message flow control strategy can be defined globally for all channels between a given API and its MA, and the strategy can be configured via the P&M system as a merge quality of service. That QoS then applies to all applications subscribing with that merge QoS. In another embodiment, the throttling strategy can be customized via API function calls from the application, thereby providing some flexibility. In this particular case, the API communication engine conveys the throttling strategy to the MA when the channel is established. The channel configuration parameters are negotiated between the API communication engine and the MA during this phase.
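A minimal sketch of subscription-based conflation (data mixing) is given below: rather than dropping old messages, an update overwrites the queued entry for the same key, so the subscriber still receives the latest state for every key. Keying the merge by topic is an assumption made for the example; in the system the merge policy is part of the subscription QoS negotiated via the P&M system or the API.

```java
// Illustrative conflating queue: old values are merged away per key, not lost outright.
import java.util.LinkedHashMap;
import java.util.Map;

public class ConflatingQueue {
    private final Map<String, String> latestByKey = new LinkedHashMap<>();

    synchronized void publish(String key, String payload) {
        latestByKey.put(key, payload);   // overwrite: the stale value for this key is conflated
    }

    synchronized Map<String, String> drain() {
        Map<String, String> batch = new LinkedHashMap<>(latestByKey);
        latestByKey.clear();
        return batch;
    }

    public static void main(String[] args) {
        ConflatingQueue q = new ConflatingQueue();
        q.publish("MARKET.NYSE.IBM", "84.10");
        q.publish("MARKET.NYSE.MSFT", "27.02");
        q.publish("MARKET.NYSE.IBM", "84.15");    // conflates the earlier IBM tick
        System.out.println(q.drain());            // {MARKET.NYSE.IBM=84.15, MARKET.NYSE.MSFT=27.02}
    }
}
```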
Note that when this custom throttling strategy is implemented at the subscription level rather than at the message level, the application can define the strategy when it subscribes to a given topic. The subscription-based throttling strategy is then added to the channel configuration of that particular subscription.
The API communication engine can be configured to provide incremental message processing; the MA to which the API is connected can do so as well. For incremental message processing, an application can subscribe to an inline incremental message processing service for a given subscription or group of subscriptions. This service is subsequently executed on, or applied to, the subscribed message flow. In addition, an application can use an application-level message processing language to register pseudocode that references fields in the message (for example, NEWFIELD=(FIELD(N)+FIELD(M))/2 defines the creation of a new field at the end of the message whose value equals the arithmetic mean of fields N and M). These incremental message processing services may require service-specific state to be held and updated as new messages are processed. Such state is defined in the same manner as fields and is reused within the pseudocode (for example, STATE(0) += FIELD(N), meaning that state number 0 is the running sum of FIELD(N)). These services can be defined in the system by default, so that they only need to be enabled when subscribing to a particular topic, or they can be custom-defined. In either case, this inline incremental message processing service can be performed by the API communication engine or by the MA connected to that API.
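The two pseudocode rules quoted above can be realised, for illustration only, in plain Java as follows: a derived field equal to the arithmetic mean of two existing fields is appended to each message, and a persistent state slot accumulates a running sum across messages. The field indices and the size of the state array are assumptions of the sketch.

```java
// Illustrative incremental message processing:
//   NEWFIELD = (FIELD(n) + FIELD(m)) / 2   and   STATE(0) += FIELD(n)
import java.util.Arrays;

public class IncrementalProcessing {
    private final double[] state = new double[4];   // STATE(0..3), kept across messages

    double[] process(double[] fields, int n, int m) {
        state[0] += fields[n];                                  // running sum of field n
        double[] out = Arrays.copyOf(fields, fields.length + 1);
        out[fields.length] = (fields[n] + fields[m]) / 2.0;     // derived field appended at the end
        return out;
    }

    public static void main(String[] args) {
        IncrementalProcessing ip = new IncrementalProcessing();
        System.out.println(Arrays.toString(ip.process(new double[]{10.0, 20.0}, 0, 1))); // [10.0, 20.0, 15.0]
        System.out.println(Arrays.toString(ip.process(new double[]{30.0, 40.0}, 0, 1))); // [30.0, 40.0, 35.0]
        System.out.println("STATE(0) = " + ip.state[0]);                                  // 40.0
    }
}
```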
Similar to the inline incremental message processing service, content-based access control lists (ACLs) can, depending on the implementation, be deployed on the API communication engine, on the MA, or on both. Suppose, for example, that a stock trader is only interested in messages carrying IBM quotes when the IBM price is above $50 and would rather discard all messages whose quotes are below that value. To that end, the API (or MA) can also define content-based ACLs and apply the defined ACLs on a per-subscription basis. A subscription-based ACL can be a combination of an ACL condition expressed using fields in the message and an ACL action expressed in the form of REJECT, ACCEPT, LOG or another suitable manner. An example of such an ACL is: (FIELD(n) < VALUE, ACCEPT, REJECT|LOG).
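For illustration, the ACL form (FIELD(n) < VALUE, ACCEPT, REJECT|LOG) can be modelled as a small rule object that evaluates a condition on one field and yields one of the listed actions. The IBM-above-$50 example from the text is used; everything beyond the action names is an assumption of the sketch.

```java
// Illustrative content-based ACL rule: condition on one field, action on match/miss.
import java.util.function.DoublePredicate;

public class ContentAcl {
    enum Action { ACCEPT, REJECT, LOG }

    record Rule(int fieldIndex, DoublePredicate condition, Action onMatch, Action onMiss) {
        Action evaluate(double[] fields) {
            return condition.test(fields[fieldIndex]) ? onMatch : onMiss;
        }
    }

    public static void main(String[] args) {
        // Only interested in IBM quotes when the price is above $50.
        Rule rule = new Rule(0, price -> price > 50.0, Action.ACCEPT, Action.REJECT);
        System.out.println(rule.evaluate(new double[]{84.12}));  // ACCEPT
        System.out.println(rule.evaluate(new double[]{42.00}));  // REJECT
    }
}
```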
To improve efficiency further, the API communication engine can be configured to offload some of the message processing to a smart messaging network interface card (NIC). Such a smart messaging NIC provides a hardware-implemented networking I/O stack that bypasses the OS networking I/O, performs DMA directly from the I/O card into the application memory space, and manages messaging reliability, including retransmission and temporary caching. The smart messaging NIC can further perform channel management, including the message flow control, incremental message processing and content-based ACLs described above. Two implementations of this smart messaging NIC are shown in Fig. 8a and Fig. 8b, respectively. Fig. 8a shows a memory interconnect card 808, and Fig. 8b shows a messaging offload card 810. Both implementations include a host CPU 802, host memory 804 and a host PCI bridge 806.
As is well known, stability, availability and consistency are often essential in enterprise operations. For this reason, the publish/subscribe middleware system can be designed to be fault-tolerant, with several of its components deployed as fault-tolerant systems. For example, MAs can be deployed as fault-tolerant MA pairs, in which one MA is referred to as the primary MA and the second MA is referred to as the secondary or fault-tolerant MA (FT MA). In addition, for store-and-forward operation, a CE (caching engine) can be connected to a primary or secondary core/edge MA. When a primary or secondary MA has an active connection to a CE, all or a subset of the messages it routes are forwarded to that CE, which writes them to a storage area for retention. These messages are then available for retransmission, upon request, within a predetermined period of time.
An example of a fault-tolerant design is shown in Fig. 10. In this example, the system is fault-tolerant on a per-session basis. Another possible configuration is complete failover, but session-based fault tolerance has been chosen for this example. A session is defined as the communication between two MAs or between an MA and an API (e.g., 910), and it can be active or passive. If a fault occurs, the MA or API can decide to switch the session from the primary MA 906 to the secondary MA 908. A fault occurs when a session experiences connectivity problems and/or failures of system resources such as the CPU, memory or interfaces. Connectivity problems are defined with respect to the underlying channel. For example, an IP-based channel experiences connectivity problems when loss, delay and/or jitter increase abnormally over time. For a memory-based channel, connectivity problems can be defined in terms of memory address conflicts and the like. When a session experiences certain connectivity and/or system resource problems, the MA or API decides to switch the session from the primary MA to the secondary MA.
In one implementation, the primary and secondary MAs can be viewed as a single logical MA that uses channel-based logic to map certain logical channel addresses to physical channel addresses. For example, for an IP-based channel, the API or MA can redirect a problematic session to the secondary MA by updating the ARP cache entry of the MA's logical address so that it points to the physical MAC address of the secondary MA.
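The logical-to-physical remapping described above can be sketched as follows; the addresses are placeholders, and the map stands in for the ARP cache entry that would be rewritten for an IP-based channel.

```java
// Illustrative session failover: existing sessions keep addressing the same
// logical MA; only the logical -> physical mapping is rewritten.
import java.util.HashMap;
import java.util.Map;

public class SessionFailover {
    private final Map<String, String> logicalToPhysical = new HashMap<>();

    SessionFailover() {
        logicalToPhysical.put("ma-logical-1", "primary-MA-00:11:22:33:44:55");
    }

    String resolve(String logical) { return logicalToPhysical.get(logical); }

    void failover(String logical, String secondaryPhysical) {
        logicalToPhysical.put(logical, secondaryPhysical);  // analogous to updating the ARP cache entry
    }

    public static void main(String[] args) {
        SessionFailover fo = new SessionFailover();
        System.out.println(fo.resolve("ma-logical-1"));     // primary physical address
        fo.failover("ma-logical-1", "secondary-MA-66:77:88:99:AA:BB");
        System.out.println(fo.resolve("ma-logical-1"));     // traffic now reaches the FT MA
    }
}
```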
In short, the session-based fault-tolerant design has the advantage of not affecting all sessions when only one, or a subset, of the sessions experiences problems. In other words, when a session experiences certain performance problems, that session is moved from the primary MA (e.g., 906) to the secondary fault-tolerant (FT) MA 908 without affecting the other sessions associated with that primary MA 906. Thus, for example, AP1-4 are shown as still having their respective active sessions with the primary MA 902 (as the active MA), while AP5 has an active session with the FT MA 908.
When communicating with its corresponding MA, the API uses a physical medium connected via one or more network interface cards or smart messaging offload NICs. Fig. 10 illustrates the interfaces used for communication between the API and the MA.
In summary, the present invention provides a new way of carrying out messaging and, more particularly, a new publish/subscribe middleware system with an intelligent messaging application programming interface. Although the invention has been described in detail with reference to certain preferred versions thereof, other versions are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the preferred versions contained herein.
Claims (36)
- 1. An application programming interface for communication between an application and a publish/subscribe middleware system, comprising: a communication engine configured to serve as a gateway for communications between the application and the publish/subscribe middleware system, wherein operation of the communication engine is transparent to the application and serves to dynamically select the message transport protocol used for the application and to monitor in real time and dynamically control transport channel resources and flow; one or more stubs for communication between the application and the communication engine; and a bus for communication between the one or more stubs and the communication engine.
- 2. The application programming interface of claim 1, wherein the bus is an inter-process communication bus or an in-process communication bus.
- 3. The application programming interface of claim 1, wherein the communication engine further operates to dynamically adjust the number of messages packed into a frame.
- 4. The application programming interface of claim 1, wherein the communication engine further operates to provide session-based fault tolerance.
- 5. The application programming interface of claim 1, wherein the communication engine further operates to temporarily cache messages.
- 6. The application programming interface of claim 1, wherein the communication engine further operates to perform incremental message processing.
- 7. The application programming interface of claim 6, wherein the incremental message processing includes deployment of a content-based access control list (ACL), wherein each entry in the list is associated with an access condition and an action.
- 8. The application programming interface of claim 1, wherein the communication engine further operates to register with a messaging appliance of the publish/subscribe middleware system and to become logically connected to that messaging appliance.
- 9. The application programming interface of claim 8, wherein a log-in is requested in the registration and subscriptions are topic-based, and wherein a topic defines a shared access domain for which the application programming interface has publish/subscribe authorization.
- 10. The application programming interface of claim 1, wherein the communication engine further operates to perform late schema binding.
- 11. The application programming interface of claim 1, wherein the communication engine further operates to perform partial message publication.
- 12. The application programming interface of claim 1, wherein the communication engine further operates to perform direct memory access to messages stored by the application.
- 13. The application programming interface of claim 1, wherein the communication engine further operates to handle message transmission in batches.
- 14. The application programming interface of claim 12, wherein handling the batched message transmission includes message queuing, the message queuing being limited so as to avoid queue overflow and communication latency.
- 15. The application programming interface of claim 1, wherein the real-time message transport resource and flow control applies the following strategy: either identifying and ignoring old messages, or blending messages.
- 16. The application programming interface of claim 15, wherein the strategy is applied globally to all message paths associated with the application programming interface.
- 17. The application programming interface of claim 15, wherein the strategy is user-defined.
- 18. The application programming interface of claim 15, wherein the strategy is defined and put into effect when the application subscribes.
- 19. The application programming interface of claim 1, wherein the communication engine further operates to process messages in a raw, compressed data format and to bind the raw data to its schema.
- 20. The application programming interface of claim 6, wherein the incremental message processing is defined at application registration time.
- 21. The application programming interface of claim 1, wherein the communication engine further operates to offload message processing to an interface card.
- 22. The application programming interface of claim 1, wherein the publish/subscribe middleware system comprises a messaging appliance, and wherein protocol optimization is shared between the messaging appliance and the application programming interface in a master/slave configuration, with the application programming interface acting as the slave.
- 23. The application programming interface of claim 2, wherein, if the inter-process communication bus is used, it is implemented using sockets or shared memory, and if the in-process communication bus is used, it is implemented using function calls.
- 24. An application programming interface for communication between an application and a publish/subscribe middleware system, comprising: a communication engine configured to serve as a gateway for communications between the application and the publish/subscribe middleware system, the communication engine having logical layers including a message layer and a message transport layer, wherein the message layer includes an application delivery routing engine, an administrative message layer and a message routing engine, and wherein the message transport layer includes a channel management portion for controlling in real time, based on system resources, the transmission paths used by the messages handled by the message layer; one or more stubs for communication between the application and the communication engine; and a bus for communication between the one or more stubs and the communication engine.
- 25. The application programming interface of claim 24, wherein the communication engine is deployed on an operating system.
- 26. The application programming interface of claim 24, wherein the operating system includes a driver for an interface card, and wherein the channel management portion uses the interface card to interface with a physical medium connection for transferring messages to and from the application.
- 27. The application programming interface of claim 26, wherein the interface card is a network interface card operative for memory interconnect or for message processing offload.
- 28. The application programming interface of claim 26, wherein the interface card includes a hardware-based networking input/output (I/O) stack and operates to perform direct memory access and caching for the transfers.
- 29. The application programming interface of claim 24, wherein the message routing engine includes a transport protocol optimization service.
- 30. The application programming interface of claim 24, wherein the application delivery routing engine operates to map applications to topic subscriptions.
- 31. The application programming interface of claim 24, wherein the channel management portion controls a plurality of channels and the application delivery routing engine delivers messages to applications based on the mapping.
- 32. The application programming interface of claim 30, wherein the administrative message layer handles administrative messages and the message routing and application delivery routing engines handle data messages.
- 33. The application programming interface of claim 23, wherein the communication engine and the one or more stubs are compiled and linked into an application that uses the application programming interface to communicate with the publish/subscribe middleware system.
- 34. The application programming interface of claim 23, wherein the communication engine further operates to perform late schema binding.
- 35. The application programming interface of claim 34, wherein the application delivery routing engine operates to bind the schema to the original message data, thereby allowing the application to access the message information transparently.
- 36. The application programming interface of claim 1, further comprising a rendering engine operative to convert inbound and outbound messages to and from the application between an application data format and the schema of the transported message data.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US64198805P | 2005-01-06 | 2005-01-06 | |
US60/641,988 | 2005-01-06 | ||
US60/688,983 | 2005-06-08 | ||
US11/316,778 | 2005-12-23 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN101326508A true CN101326508A (en) | 2008-12-17 |
Family
ID=39086087
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNA2005800460930A Pending CN101326508A (en) | 2005-01-06 | 2005-12-23 | Intelligent messaging application programming interface |
CNA200580046095XA Pending CN101124567A (en) | 2005-01-06 | 2005-12-23 | Caching engine in a messaging system |
CNA2005800460945A Pending CN101124566A (en) | 2005-01-06 | 2005-12-23 | End-to-end publish/subscribe intermediate system structure |
CNA2005800461011A Pending CN101133380A (en) | 2005-01-06 | 2005-12-23 | Message transmission device based on hardware |
CNA2006800018954A Pending CN101151604A (en) | 2005-01-06 | 2006-01-06 | Provisioning and management in a message publish/subscribe system |
Family Applications After (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNA200580046095XA Pending CN101124567A (en) | 2005-01-06 | 2005-12-23 | Caching engine in a messaging system |
CNA2005800460945A Pending CN101124566A (en) | 2005-01-06 | 2005-12-23 | End-to-end publish/subscribe intermediate system structure |
CNA2005800461011A Pending CN101133380A (en) | 2005-01-06 | 2005-12-23 | Message transmission device based on hardware |
CNA2006800018954A Pending CN101151604A (en) | 2005-01-06 | 2006-01-06 | Provisioning and management in a message publish/subscribe system |
Country Status (1)
Country | Link |
---|---|
CN (5) | CN101326508A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103677549A (en) * | 2012-09-11 | 2014-03-26 | 阿里巴巴集团控股有限公司 | Data processing method and device |
CN104935625A (en) * | 2014-03-18 | 2015-09-23 | 安讯士有限公司 | Method and system for finding services in a service-oriented architecture (SOA) network |
CN106210101A (en) * | 2016-07-20 | 2016-12-07 | 上海携程商务有限公司 | Message management system and information management method |
CN107819734A (en) * | 2016-09-14 | 2018-03-20 | 上海福赛特机器人有限公司 | The means of communication and communication system between a kind of program based on web socket |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011029821A (en) * | 2009-07-23 | 2011-02-10 | Canon Inc | Information processing apparatus, control method of the information processing apparatus, and control program for the information processing apparatus |
US8452835B2 (en) * | 2009-12-23 | 2013-05-28 | Citrix Systems, Inc. | Systems and methods for object rate limiting in multi-core system |
EP2367309B1 (en) | 2010-02-10 | 2016-07-13 | Alcatel Lucent | Method for detecting a synchronization failure of a transparent clock and related protection schemes |
US20120135676A1 (en) * | 2010-11-26 | 2012-05-31 | Industrial Technology Research Institute | System and method for deployment and management of interactive regional broadcast services |
WO2013152312A1 (en) * | 2012-04-06 | 2013-10-10 | Interdigital Patent Holdings, Inc. | Optimization of peer-to-peer content delivery service |
US9641635B2 (en) * | 2012-08-28 | 2017-05-02 | Tata Consultancy Services Limited | Dynamic selection of reliability of publishing data |
US9736226B2 (en) * | 2012-10-23 | 2017-08-15 | Nec Corporation | Rule distribution server, event processing system and method, and program |
CN103534988B (en) * | 2013-06-03 | 2017-04-12 | 华为技术有限公司 | Publish and subscribe messaging method and apparatus |
CN104579605B (en) * | 2013-10-23 | 2018-04-10 | 华为技术有限公司 | A kind of data transmission method and device |
CN104618466A (en) * | 2015-01-20 | 2015-05-13 | 上海交通大学 | System for balancing load and controlling overload based on message transfer and control method of system |
CN105991579B (en) * | 2015-02-12 | 2019-05-28 | 华为技术有限公司 | Method for sending information, related network device and system |
US9407585B1 (en) * | 2015-08-07 | 2016-08-02 | Machine Zone, Inc. | Scalable, real-time messaging system |
CN107306248B (en) * | 2016-04-19 | 2023-04-28 | 广东国盾量子科技有限公司 | Optical quantum switch and communication method thereof |
US9608928B1 (en) * | 2016-07-06 | 2017-03-28 | Machine Zone, Inc. | Multiple-speed message channel of messaging system |
WO2018169083A1 (en) * | 2017-03-16 | 2018-09-20 | ソフトバンク株式会社 | Relay device and program |
CN108390917B (en) * | 2018-01-25 | 2021-02-02 | 珠海金山网络游戏科技有限公司 | Intelligent message sending method and device |
US11212218B2 (en) * | 2018-08-09 | 2021-12-28 | Tata Consultancy Services Limited | Method and system for message based communication and failure recovery for FPGA middleware framework |
TWI678087B (en) * | 2018-11-22 | 2019-11-21 | 財團法人工業技術研究院 | Method of message synchronization in message queue publish and subscriotion and system thereof |
EP3767922B1 (en) * | 2019-07-17 | 2023-11-08 | ABB Schweiz AG | Method of channel mapping in an industrial process control system |
CN110532113B (en) * | 2019-08-30 | 2023-03-24 | 北京地平线机器人技术研发有限公司 | Information processing method and device, computer readable storage medium and electronic equipment |
CN112817779B (en) * | 2021-01-29 | 2024-08-20 | 京东方科技集团股份有限公司 | Modularized application program communication method, device, equipment and medium |
EP4064638B1 (en) * | 2021-03-23 | 2024-06-19 | ABB Schweiz AG | Highly available delivery of ordered streams of iot messages |
CN114827307B (en) * | 2022-04-14 | 2024-04-19 | 中国建设银行股份有限公司 | Data sharing method, system and server based on multiple data systems |
CN115086403A (en) * | 2022-04-27 | 2022-09-20 | 中国科学院上海微系统与信息技术研究所 | Edge computing gateway micro-service architecture for ubiquitous heterogeneous access |
2005
- 2005-12-23 CN CNA2005800460930A patent/CN101326508A/en active Pending
- 2005-12-23 CN CNA200580046095XA patent/CN101124567A/en active Pending
- 2005-12-23 CN CNA2005800460945A patent/CN101124566A/en active Pending
- 2005-12-23 CN CNA2005800461011A patent/CN101133380A/en active Pending
2006
- 2006-01-06 CN CNA2006800018954A patent/CN101151604A/en active Pending
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103677549A (en) * | 2012-09-11 | 2014-03-26 | 阿里巴巴集团控股有限公司 | Data processing method and device |
CN104935625A (en) * | 2014-03-18 | 2015-09-23 | 安讯士有限公司 | Method and system for finding services in a service-oriented architecture (SOA) network |
CN106210101A (en) * | 2016-07-20 | 2016-12-07 | 上海携程商务有限公司 | Message management system and information management method |
CN106210101B (en) * | 2016-07-20 | 2019-06-18 | 上海携程商务有限公司 | Message management system and information management method |
CN107819734A (en) * | 2016-09-14 | 2018-03-20 | 上海福赛特机器人有限公司 | The means of communication and communication system between a kind of program based on web socket |
Also Published As
Publication number | Publication date |
---|---|
CN101124567A (en) | 2008-02-13 |
CN101133380A (en) | 2008-02-27 |
CN101151604A (en) | 2008-03-26 |
CN101124566A (en) | 2008-02-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101326508A (en) | Intelligent messaging application programming interface | |
US9253243B2 (en) | Systems and methods for network virtualization | |
CA2594267C (en) | End-to-end publish/subscribe middleware architecture | |
US10367852B2 (en) | Multiplexed demand signaled distributed messaging | |
US20110185082A1 (en) | Systems and methods for network virtualization | |
Wetherall et al. | Introducing new internet services: Why and how | |
US7039671B2 (en) | Dynamically routing messages between software application programs using named routing nodes and named message queues | |
CN108881369A (en) | A kind of method for interchanging data and cloud message-oriented middleware system of the cloud message-oriented middleware based on data-oriented content | |
CN114418574A (en) | Consensus and resource transmission method, device and storage medium | |
Alkhawaja et al. | Message oriented middleware with QoS support for smart grids | |
US20070005800A1 (en) | Methods, apparatus, and computer programs for differentiating between alias instances of a resource | |
Delamer et al. | Ubiquitous communication systems for the electronics production industry: Extending the CAMX framework | |
Geetha et al. | Optimized Scheduling Algorithm for Energy-Efficient Wireless Network Transmissions. | |
Aramudhan | LDMA: Load Balancing Using Decision Making Decentralized Mobile Agents | |
Guo et al. | A three-layer network management system | |
An et al. | Poster: A Cloud-enabled Coordination Service for Internet-scale OMG DDS Applications |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | |
PB01 | Publication | |
C10 | Entry into substantive examination | |
SE01 | Entry into force of request for substantive examination | |
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 1125198; Country of ref document: HK |
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | |
WD01 | Invention patent application deemed withdrawn after publication | Open date: 20081217 |
REG | Reference to a national code | Ref country code: HK; Ref legal event code: WD; Ref document number: 1125198; Country of ref document: HK |