WO2014039895A1 - System and method for supporting message pre-processing in a distributed data grid cluster - Google Patents
- Publication number
- WO2014039895A1 (PCT application PCT/US2013/058610)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- thread
- message
- data grid
- distributed data
- incoming
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/20—Handling requests for interconnection or transfer for access to input/output bus
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/42—Bus transfer protocol, e.g. handshake; Synchronisation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/546—Message passing systems or structures, e.g. queues
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/54—Indexing scheme relating to G06F9/54
- G06F2209/547—Messaging middleware
Definitions
- the present invention is generally related to computer systems, and is particularly related to a distributed data grid.
Background
- the system can associate a message bus with a service thread on a cluster member in the distributed data grid. Furthermore, the system can receive one or more incoming messages at the message bus using an input/output (I/O) thread, and pre-process said one or more incoming messages on the I/O thread before each said incoming message is delivered to a service thread in the distributed data grid.
- the system can take advantage of a pool of input/output (I/O) threads to deserialize inbound messages before they are delivered to the addressed service, and can relieve the bottleneck that is caused by performing all message deserialization in a single threaded fashion before the message type can be identified and offloaded to the thread-pool within the distributed data grid.
- An embodiment of the present invention provides an apparatus for supporting message pre-processing in a distributed data grid, comprising: means for associating a message bus with a service thread on a cluster member in the distributed data grid; means for receiving one or more incoming messages at the message bus using an input/output (I/O) thread; and means for pre-processing said one or more incoming messages on the I/O thread before each said incoming message is delivered to a said service thread in the distributed data grid.
- the apparatus further comprises: means for executing, via the I/O thread, a pre-process method that is associated with a said incoming message.
- the apparatus further comprises: means for deserializing each said incoming message using the I/O thread.
- the apparatus further comprises: means for handling a said incoming message completely on the I/O thread and avoiding using the service thread.
- the apparatus further comprises: means for processing said one or more pre-processed incoming messages on the service thread.
- the apparatus further comprises: means for sending a response to a service requester that sends a said incoming message.
- the apparatus further comprises: means for avoiding context switches that move said one or more incoming messages between the I/O thread and the service thread.
- the apparatus further comprises: means for allowing the message bus to be based on a remote direct memory access (RDMA) protocol.
- the apparatus further comprises: means for associating a thread pool with the message bus, wherein the thread pool contains a plurality of I/O threads.
- the apparatus further comprises: means for processing said one or more incoming messages on the plurality of I/O threads in parallel.
- Another embodiment of the present invention provides a system for message pre-processing in a distributed data grid, comprising: a first associating unit configured to associate a message bus with a service thread on a cluster member in the distributed data grid; a receiving unit configured to receive one or more incoming messages at the message bus using an input/output (I/O) thread; and a pre-processing unit configured to pre-process said one or more incoming messages on the I/O thread before each said incoming message is delivered to a said service thread in the distributed data grid.
- system further comprises: an executing unit configured to execute, via the I/O thread, a pre-process method that is associated with a said incoming message.
- system further comprises: a deserializing unit configured to deserialize each said incoming message using the I/O thread.
- a handling unit configured to handle a said incoming message completely on the I/O thread and avoid using the service thread.
- system further comprises: a first processing unit configured to process said one or more pre-processed incoming messages on the service thread.
- system further comprises: a sending unit configured to send a response to a service requester that sends a said incoming message.
- system further comprises: a context switch avoiding unit configured to avoid context switches that move said one or more incoming messages between the I/O thread and the service thread.
- the message bus is based on a remote direct memory access (RDMA) protocol.
- system further comprises: a second associating unit configured to associate a thread pool with the message bus, wherein the thread pool contains a plurality of I/O threads.
- system further comprises: a second processing unit configured to process said one or more incoming messages on the plurality of I/O threads in parallel.
- Figure 1 shows an illustration of supporting message transport based on a datagram layer in a distributed data grid.
- Figure 2 shows an illustration of providing a message bus in a distributed data grid, in accordance with an embodiment of the invention.
- Figure 3 shows an illustration of using a TCP/IP based transport layer to support messaging in a distributed data grid.
- Figure 4 shows an illustration of using a RDMA based transport layer to support messaging in a distributed data grid, in accordance with an embodiment of the invention.
- Figure 5 shows an illustration of supporting bus per service in a distributed data grid, in accordance with an embodiment of the invention.
- Figure 6 illustrates an exemplary flow chart for supporting bus per service in a distributed data grid, in accordance with an embodiment of the invention.
- Figure 7 shows an illustration of supporting parallel message deserialization in a distributed data grid, in accordance with an embodiment of the invention.
- Figure 8 illustrates an exemplary flow chart for supporting parallel message deserialization in a distributed data grid, in accordance with an embodiment of the invention.
- Figure 9 shows an illustration of supporting message pre-processing in a distributed data grid, in accordance with an embodiment of the invention.
- Figure 10 illustrates an exemplary flow chart for supporting message pre-processing in a distributed data grid, in accordance with an embodiment of the invention.
- Figure 11 illustrates a schematic functional block diagram of a system in accordance with an embodiment of the invention.
- Figure 12 illustrates a functional block diagram of a system for message pre-processing in a distributed data grid in accordance with an embodiment of the present invention.
- the scalable message bus can provide each service with its own bus (transport engine).
- the distributed data grid can take advantage of a pool of input/output (I/O) threads to deserialize inbound messages before they are delivered to the addressed service, and can relieve the bottleneck that is caused by performing all message deserialization in a single threaded fashion before the message type can be identified and offloaded to the thread-pool within the distributed data grid. Additionally, the distributed data grid allows incoming messages to be pre-processed on the I/O thread for the scalable message bus.
- a “data grid cluster”, or “data grid” is a system comprising a plurality of computer servers which work together to manage information and related operations, such as computations, within a distributed or clustered environment.
- the data grid cluster can be used to manage application objects and data that are shared across the servers.
- a data grid cluster should have low response time, high throughput, predictable scalability, continuous availability and information reliability. As a result of these capabilities, data grid clusters are well suited for use in computational intensive, stateful middle-tier applications.
- Some examples of data grid clusters can store the information in-memory to achieve higher performance, and can employ redundancy in keeping copies of that information synchronized across multiple servers, thus ensuring resiliency of the system and the availability of the data in the event of server failure.
- Coherence provides replicated and distributed (partitioned) data management and caching services on top of a reliable, highly scalable peer-to-peer clustering protocol.
- An in-memory data grid can provide the data storage and management capabilities by distributing data over a number of servers working together.
- the data grid can be middleware that runs in the same tier as an application server or within an application server. It can provide management and processing of data and can also push the processing to where the data is located in the grid.
- the in-memory data grid can eliminate single points of failure by automatically and transparently failing over and redistributing its clustered data management services when a server becomes inoperative or is disconnected from the network. When a new server is added, or when a failed server is restarted, it can automatically join the cluster and services can be failed back over to it, transparently redistributing the cluster load.
- the data grid can also include network-level fault tolerance features and transparent soft re-start capability.
- the functionality of a data grid cluster is based on using different cluster services.
- the cluster services can include root cluster services, partitioned cache services, and proxy services.
- each cluster node can participate in a number of cluster services, both in terms of providing and consuming the cluster services.
- Each cluster service has a service name that uniquely identifies the service within the data grid cluster, and a service type, which defines what the cluster service can do.
- the services can be either configured by the user, or provided by the data grid cluster as a default set of services.
- Figure 1 shows an illustration of supporting message transport based on a datagram layer in a distributed data grid.
- a cluster member 101 in a distributed data grid 100 can include one or more client/service threads 102.
- the client/service threads 102 on the cluster member 101 can send a message to other cluster members in the distributed data grid 100 through a network, e.g. an Ethernet network 110, using a user datagram protocol (UDP).
- the cluster member 101 can employ different logics, such as packetization logic, packet retransmission logic, and Ack/Nack logic, for sending a message to another cluster member in the distributed data grid 100 and receiving a response message.
- the above messaging process can involve multiple context switches.
- the client/service thread 102 can first send the message to a publisher 103. Then, the publisher 103 can forward the message to a speaker 104, which is responsible for sending the message to the network 110.
- the cluster member 101 in a distributed data grid 100 can receive a response message using one or more listeners 105, which can forward the received message to a receiver 106. Then, the receiver 106 can forward the received message to the client/service thread 102 and, optionally, notify the publisher 103.
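The multi-hop send path described above can be sketched as a small simulation with plain Java threads and queues. This is illustrative only; the class, queue, and method names are our own stand-ins, not Coherence internals. The point it shows is that every hand-off between the client/service thread, the publisher, and the speaker is a blocking queue operation, i.e. a context switch:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative simulation of the datagram-layer send path; the names
// below are stand-ins, not Coherence internals.
class DatagramSendPath {
    static final BlockingQueue<String> publisherQueue = new ArrayBlockingQueue<>(16);
    static final BlockingQueue<String> speakerQueue = new ArrayBlockingQueue<>(16);
    static final BlockingQueue<String> network = new ArrayBlockingQueue<>(16);

    // Sends one message through the publisher and speaker hops and returns
    // what ends up "on the wire"; each take()/put() pair is a hand-off
    // between threads, i.e. a context switch.
    static String sendAndCapture(String msg) throws InterruptedException {
        Thread publisher = new Thread(() -> {
            try {
                // Publisher hop: packetization logic, then forward to the speaker.
                speakerQueue.put("packetized(" + publisherQueue.take() + ")");
            } catch (InterruptedException ignored) { }
        });
        Thread speaker = new Thread(() -> {
            try {
                // Speaker hop: the thread that actually writes to the network.
                network.put(speakerQueue.take());
            } catch (InterruptedException ignored) { }
        });
        publisher.start();
        speaker.start();
        publisherQueue.put(msg);        // client/service thread -> publisher
        String onWire = network.take(); // simulated network write
        publisher.join();
        speaker.join();
        return onWire;
    }
}
```

The receive side mirrors this with listener and receiver hops, so a full request/response involves several such switches per message.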
- a scalable message bus can be used for eliminating I/O bottlenecks at various levels.
- FIG. 2 shows an illustration of providing a message bus in a distributed data grid, in accordance with an embodiment of the invention.
- a cluster member 201 can run on a virtual machine 210, e.g. a JAVA virtual machine, in a distributed data grid 200.
- the cluster member 201 can involve one or more services 211, which can use one or more message buses 212 for messaging.
- the message buses 212 can be based on a binary low-level message transport layer, with multi-point addressing and reliable ordered delivery. Also, the message bus can be based on pure Java implementation and/or native implementations, and can employ an asynchronous event based programming model.
- the message bus 212 can be supported using a networking hardware and software subsystem, e.g. an Exabus in Oracle ExaLogic engineered system.
- the message bus can not only make applications run faster, but can also make them run more efficiently.
- the message bus can make applications run consistently and predictably, even in extremely large scale deployments with thousands of processor cores and terabytes of memory, and for virtually all business applications.
- each of the message buses 212 can be a provider-based transport layer, which can be supported by using a message bus provider 202 in the virtual machine 210, such as JRockit/HotSpot.
- the message bus provider 202 can be based on a pluggable provider based framework.
- the message bus provider 202 can support different message buses such as a SocketBus, which is based on TCP/SDP, and an InfiniBus, which is based on Infiniband RDMA.
- the message bus provider can use a single switch to select a bus protocol from a plurality of bus protocols.
- the system can specify the single switch in a configuration file.
- the system can use the single switch to select one of several buses, such as "tmb" for the TCP MessageBus.
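The single-switch selection can be pictured as a small provider registry keyed by a bus scheme. This is a hedged sketch of the idea: only the "tmb" (TCP MessageBus) scheme is named in the text above, so the other scheme strings and all class names here are illustrative assumptions, not the actual provider API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a pluggable, provider-based bus framework: one configuration
// switch (a scheme string) selects the transport. Only "tmb" (TCP
// MessageBus) is named in the source; the other schemes are assumptions.
class MessageBusProvider {
    interface MessageBus {
        String protocol();
    }

    static final Map<String, MessageBus> registry = new HashMap<>();
    static {
        registry.put("tmb", () -> "TCP");              // TCP MessageBus (SocketBus)
        registry.put("sdmb", () -> "SDP");             // hypothetical SDP scheme
        registry.put("imb", () -> "InfiniBand RDMA");  // hypothetical InfiniBus scheme
    }

    // Resolves the single switch value to a concrete bus implementation.
    static MessageBus create(String scheme) {
        MessageBus bus = registry.get(scheme);
        if (bus == null) {
            throw new IllegalArgumentException("unknown bus scheme: " + scheme);
        }
        return bus;
    }
}
```

Because the framework is provider-based, swapping SocketBus for InfiniBus is a one-line configuration change rather than a code change.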
- the message buses 212 can improve intra-node scalability in the distributed data grid 200, and can make the distributed data grid 200 protocol agnostic. For example, using the message buses 212, the distributed data grid 200 can effectively utilize a large number of cores, improve messaging concurrency, and increase throughput and reduce latency. Also, the message buses 212 allow the distributed data grid 200 to minimize context switches and take advantage of zero copy.
- the system can trigger death detection on the cluster member when a message bus fails.
- Figure 3 shows an illustration of using a TCP/IP based transport layer to support messaging in a distributed data grid.
- the message may need to go through an application buffer 303, a TCP/IP transport layer 305 and a kernel layer 306 on a local machine. Then, the message can be received at a remote machine in an application buffer 304, via the kernel layer 306 and the TCP/IP transport layer 305 on the remote machine.
- Figure 4 shows an illustration of using a RDMA based transport layer to support messaging in a distributed data grid.
- the system can send a message from an application 401 on a local machine directly to an application 401 on a remote machine, based on the RDMA based transport layer.
- Bus per Service
- a scalable message bus can provide each service with its own bus (or transport engine).
- FIG. 5 shows an illustration of supporting a scalable message bus for various services in a distributed data grid, in accordance with an embodiment of the invention.
- a distributed data grid 500 can include multiple cluster members, e.g. cluster members 501-504.
- each cluster member can include different services, each of which can be associated with a separate message bus.
- the cluster member 501 can include partition cache services 511-512 and invocation service 513, which can be associated with message buses 514-516;
- the cluster member 502 can include partition cache services 521-522 and invocation service 523, which can be associated with message buses 524-526;
- the cluster member 503 can include partition cache services 531-532 and invocation service 533, which can be associated with message buses 534-536;
- the cluster member 504 can include partition cache services 541-542 and invocation service 543, which can be associated with message buses 544-546.
- a network 510 can connect different message buses on different cluster members 501-504 in the distributed data grid 500.
- the network 510 can be based on a remote direct memory access (RDMA) protocol.
- the system can use the plurality of message buses to support data transferring between different cluster members in the distributed data grid. Additionally, the system can use a datagram layer 520 to support clustering in the distributed data grid, and can bypass the datagram layer 520 in the distributed data grid for data transferring.
- the system allows an increase in CPU utilization relative to the number of services configured by the end user.
- a single transport engine can be provided per service instead of per cluster node, such that the distributed data grid can relieve the bottleneck when too many processors try to utilize a single cluster node.
- Figure 6 illustrates an exemplary flow chart for supporting message transport based on a provider-based transport layer in a distributed data grid, in accordance with an embodiment of the invention.
- the system can provide a plurality of message buses in the distributed data grid, wherein the distributed data grid includes a plurality of cluster members.
- the system can associate each service in the distributed data grid with a said message bus, and, at step 603, the system can use the plurality of message buses to support data transferring between different cluster members in the distributed data grid.
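A minimal sketch of the bus-per-service association follows. The types and method names are hypothetical, but the structure shows the key point: the transport engine is keyed by service name rather than shared per cluster node, so a lookup for one service never returns another service's engine.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the bus-per-service association: the transport engine is
// keyed by service name rather than shared per cluster node. Types and
// method names are illustrative assumptions.
class BusPerService {
    static class Bus {
        final String owningService;
        Bus(String service) {
            this.owningService = service;
        }
    }

    private final Map<String, Bus> busByService = new ConcurrentHashMap<>();

    // Each service gets its own bus (transport engine), created lazily
    // on first use; distinct services never share an engine.
    Bus busFor(String serviceName) {
        return busByService.computeIfAbsent(serviceName, Bus::new);
    }
}
```

With this layout, adding services scales CPU utilization, since each new service brings its own transport engine instead of queuing behind a single per-node engine.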
- a pool of threads can be used to provide threads, such as input/output (I/O) threads, for driving a scalable message bus to handle inbound messages in a distributed data grid, e.g. a Coherence data grid.
- the system can minimize the impact of the service thread bottleneck.
- Figure 7 shows an illustration of supporting parallel message deserialization in a distributed data grid, in accordance with an embodiment of the invention.
- a service thread 702 in a distributed data grid 700 can be associated with a message bus 701, which can receive one or more incoming messages, e.g. messages 703-704.
- the message bus 701 can be associated with a thread pool 710, which contains one or more threads, e.g. I/O threads 711-713.
- the distributed data grid 700 can take advantage of this thread pool 710 to relieve the performance bottleneck at the service thread 702.
- the distributed data grid 700 can use multiple different I/O threads 711-713 in the thread pool 710 to process the incoming messages 703-704 in parallel.
- the system can avoid the service thread bottleneck caused by performing all message deserialization in a single-threaded fashion before the message type can be identified.
- the message bus 701 can use the I/O thread 711 to deserialize the message 703, before delivering the incoming message 703 to the service thread 702. Also, the message bus 701 can use the I/O thread 713 to deserialize the message 704, before delivering the incoming message 704 to the service thread 702.
- Figure 8 illustrates an exemplary flow chart for supporting parallel message deserialization in a distributed data grid, in accordance with an embodiment of the invention.
- the system can provide a pool of threads that supplies a plurality of input/output (I/O) threads that operate to drive a scalable message bus.
- the system can receive one or more inbound messages on the plurality of I/O threads, and, at step 803, the system can deserialize the one or more inbound messages on the plurality of I/O threads before delivering the one or more inbound messages to the addressed service.
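The steps above can be sketched as follows, using assumed types (a byte payload standing in for a wire message, a string standing in for the deserialized form): payloads are deserialized on a pool of I/O threads, and only the already-deserialized messages reach the single service thread's queue.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of parallel message deserialization: inbound payloads are
// deserialized on a pool of I/O threads, and only the typed messages
// are queued for the service thread. Names are illustrative.
class ParallelDeserializer {
    static final ExecutorService ioPool = Executors.newFixedThreadPool(3);
    static final BlockingQueue<String> serviceQueue = new LinkedBlockingQueue<>();

    // Stand-in for real deserialization of a wire payload into a typed message.
    static String deserialize(byte[] payload) {
        return new String(payload);
    }

    // Called when a payload arrives: deserialize on an I/O thread, so the
    // service thread never pays the single-threaded deserialization cost.
    static void onInbound(byte[] payload) {
        ioPool.submit(() -> serviceQueue.offer(deserialize(payload)));
    }

    // Stand-in for the service thread draining its queue of typed messages.
    static String drainSorted(int n) throws InterruptedException {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            out.add(serviceQueue.take());
        }
        Collections.sort(out);
        return String.join(",", out);
    }
}
```

Because each payload is deserialized independently, the pool processes inbound messages in parallel while delivery order to the service thread reflects completion order.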
- a scalable message bus can provide message pre-processing capability, which allows pre-processing the received messages, e.g. on the input/output (I/O) thread, before delivering the received messages to a service thread.
- Figure 9 shows an illustration of supporting message pre-processing in a distributed data grid, in accordance with an embodiment of the invention.
- a service thread 902 in the distributed data grid 900 can be associated with a message bus 901.
- the message bus 901 can use one or more I/O threads, e.g. an I/O thread 903, to receive one or more incoming messages, e.g. a message 905. Additionally, the message bus 901 can use the I/O thread 903 to deserialize the incoming message 905.
- the message bus 901 can pre-process the incoming message 905, before delivering it to the service thread 902. Then, the service thread 902 can further complete processing the pre-processed incoming messages 905, and, if necessary, can send a response 907 back to the service requester that sends the incoming message 905.
- the incoming message 905 can provide a pre-process method 906.
- the message bus 901 can execute the pre-process method 906 associated with the incoming message 905 on the I/O thread 903, in order to partially or fully process the incoming message 905.
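The pre-process hook can be sketched as below. The Message interface and method names are assumptions, not the actual bus API: preProcess() runs on the I/O thread, and a message it fully handles never touches the service thread at all.

```java
// Hypothetical sketch of the pre-processing hook; the Message interface
// and method names are assumptions, not the actual bus API.
class PreProcessingBus {
    interface Message {
        // Runs on the I/O thread; returns true if fully processed there.
        boolean preProcess();
    }

    static int serviceThreadDeliveries = 0;

    // Called on the I/O thread once the message has been deserialized.
    static void onReceive(Message msg) {
        if (msg.preProcess()) {
            return;                 // handled entirely on the I/O thread
        }
        serviceThreadDeliveries++;  // stand-in for queueing to the service thread
    }
}
```

A message that only needs partial pre-processing returns false and is still delivered, already deserialized and partially processed, to the service thread.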
- the message pre-processing capability of the scalable message bus can be beneficial when it is used in the distributed data grid 900.
- the system can avoid overburdening the service thread, since the service thread can be a bottleneck in the distributed data grid.
- the system avoids the context switches that may be required when moving the message between the I/O thread 903 and the service thread 902. Such context switches can cause a significant percentage of the overall request latency, e.g. in the case of a remote direct memory access (RDMA) based transport.
- the scalable message bus allows the messages to be fully executed in parallel if the scalable message bus has multiple I/O threads, such as in the case of a RDMA based bus.
- the scalable message bus can combine the message pre-processing capability with the parallel message deserialization capability, so that multiple incoming messages can be deserialized and pre-processed in parallel.
- Figure 10 illustrates an exemplary flow chart for supporting message pre-processing in a distributed data grid, in accordance with an embodiment of the invention.
- the system can associate a message bus with a service thread on a cluster member in the distributed data grid.
- the system can receive one or more incoming messages at the message bus using an input/output (I/O) thread, and, at step 1003, the system can pre-process said one or more incoming messages on the I/O thread before each said incoming message is delivered to a service thread in the distributed data grid.
- FIG. 11 illustrates a schematic functional block diagram of a system 1100 for message pre-processing in a distributed data grid in accordance with an embodiment of the invention.
- System 1100 includes: one or more microprocessors and a cluster member in the distributed data grid running on the one or more microprocessors.
- System 1100 includes, in the cluster member, an association module 1110, a message bus 1120, a service thread 1130, and a pre-processing module 1140.
- Association module 1110 is adapted for associating message bus 1120 with service thread 1130 on a cluster member in the distributed data grid.
- Message bus 1120 is adapted for receiving one or more incoming messages using an input/output (I/O) thread.
- Pre-processing module 1140 is adapted for pre-processing the one or more incoming messages on the I/O thread before each incoming message is delivered to service thread 1130 in the distributed data grid.
- Figure 12 illustrates a functional block diagram of a system 1200 for message pre-processing in a distributed data grid in accordance with the principles of the present invention as described above.
- the functional blocks of the system 1200 may be implemented by hardware, software, or a combination of hardware and software to carry out the principles of the present invention. It is understood by those skilled in the art that the functional blocks described in Figure 12 may be combined or separated into sub-blocks to implement the principles of the present invention as described above. Therefore, the description herein may support any possible combination or separation or further definition of the functional blocks described herein.
- the system 1200 for message pre-processing in a distributed data grid comprises a first associating unit 1202, a receiving unit 1204 and a pre-processing unit 1206.
- the first associating unit 1202 is configured to associate a message bus with a service thread on a cluster member in the distributed data grid.
- the receiving unit 1204 is configured to receive one or more incoming messages at the message bus using an input/output (I/O) thread.
- the pre-processing unit 1206 is configured to pre-process said one or more incoming messages on the I/O thread before each said incoming message is delivered to a said service thread in the distributed data grid.
- system 1200 further comprises an executing unit 1208 that is configured to execute, via the I/O thread, a pre-process method that is associated with a said incoming message.
- system 1200 further comprises a deserializing unit 1210 that is configured to deserialize each said incoming message using the I/O thread.
- system 1200 further comprises a handling unit 1212 that is configured to handle a said incoming message completely on the I/O thread and avoid using the service thread.
- system 1200 further comprises a first processing unit 1214 that is configured to process said one or more pre-processed incoming messages on the service thread.
- system 1200 further comprises a sending unit 1216 that is configured to send a response to a service requester that sends a said incoming message.
- system 1200 further comprises a context switch avoiding unit 1218 that is configured to avoid context switches that move said one or more incoming messages between the I/O thread and the service thread.
- the message bus is based on a remote direct memory access (RDMA) protocol.
- system 1200 further comprises a second associating unit 1220 configured to associate a thread pool with the message bus, wherein the thread pool contains a plurality of I/O threads.
- system 1200 further comprises a second processing unit 1222 that is configured to process said one or more incoming messages on the plurality of I/O threads in parallel.
- the present invention may be conveniently implemented using one or more conventional general purpose or specialized digital computer, computing device, machine, or microprocessor, including one or more processors, memory and/or computer readable storage media programmed according to the teachings of the present disclosure.
- Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art.
- the present invention includes a computer program product which is a storage medium or computer readable medium (media) having instructions stored thereon/in which can be used to program a computer to perform any of the processes of the present invention.
- the storage medium can include, but is not limited to, any type of disk including floppy disks, optical discs, DVD, CD-ROMs, microdrive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Bus Control (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Information Transfer Between Computers (AREA)
- Multi Processors (AREA)
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP13765579.1A EP2893688B1 (en) | 2012-09-07 | 2013-09-06 | System and method for supporting message pre-processing in a distributed data grid cluster |
| CN201380046453.1A CN104620558B (zh) | 2012-09-07 | 2013-09-06 | System and method for supporting message pre-processing in a distributed data grid cluster |
| JP2015531258A JP6276273B2 (ja) | 2012-09-07 | 2013-09-06 | System and method for supporting message pre-processing in a distributed data grid cluster |
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201261698216P | 2012-09-07 | 2012-09-07 | |
| US61/698,216 | 2012-09-07 | ||
| US201261701453P | 2012-09-14 | 2012-09-14 | |
| US61/701,453 | 2012-09-14 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2014039895A1 true WO2014039895A1 (en) | 2014-03-13 |
Family
ID=49223884
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2013/058610 Ceased WO2014039895A1 (en) | 2012-09-07 | 2013-09-06 | System and method for supporting message pre-processing in a distributed data grid cluster |
| PCT/US2013/058605 Ceased WO2014039890A1 (en) | 2012-09-07 | 2013-09-06 | System and method for supporting a scalable message bus in a distributed data grid cluster |
Family Applications After (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2013/058605 Ceased WO2014039890A1 (en) | 2012-09-07 | 2013-09-06 | System and method for supporting a scalable message bus in a distributed data grid cluster |
Country Status (5)
| Country | Link |
|---|---|
| US (2) | US9535862B2 (en) |
| EP (2) | EP2893688B1 (en) |
| JP (2) | JP6310461B2 (ja) |
| CN (2) | CN104620558B (zh) |
| WO (2) | WO2014039895A1 (en) |
Families Citing this family (58)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2893688B1 (en) * | 2012-09-07 | 2018-10-24 | Oracle International Corporation | System and method for supporting message pre-processing in a distributed data grid cluster |
| CN108322472B (zh) * | 2016-05-11 | 2019-06-25 | Oracle International Corporation | Method, system and medium for providing cloud-based identity and access management |
| US10425386B2 (en) | 2016-05-11 | 2019-09-24 | Oracle International Corporation | Policy enforcement point for a multi-tenant identity and data security management cloud service |
| US10341410B2 (en) | 2016-05-11 | 2019-07-02 | Oracle International Corporation | Security tokens for a multi-tenant identity and data security management cloud service |
| US10878079B2 (en) | 2016-05-11 | 2020-12-29 | Oracle International Corporation | Identity cloud service authorization model with dynamic roles and scopes |
| US9781122B1 (en) | 2016-05-11 | 2017-10-03 | Oracle International Corporation | Multi-tenant identity and data security management cloud service |
| US10454940B2 (en) | 2016-05-11 | 2019-10-22 | Oracle International Corporation | Identity cloud service authorization model |
| US9838377B1 (en) | 2016-05-11 | 2017-12-05 | Oracle International Corporation | Task segregation in a multi-tenant identity and data security management cloud service |
| US10581820B2 (en) | 2016-05-11 | 2020-03-03 | Oracle International Corporation | Key generation and rollover |
| US9838376B1 (en) | 2016-05-11 | 2017-12-05 | Oracle International Corporation | Microservices based multi-tenant identity and data security management cloud service |
| US10516672B2 (en) | 2016-08-05 | 2019-12-24 | Oracle International Corporation | Service discovery for a multi-tenant identity and data security management cloud service |
| US10585682B2 (en) | 2016-08-05 | 2020-03-10 | Oracle International Corporation | Tenant self-service troubleshooting for a multi-tenant identity and data security management cloud service |
| US10505941B2 (en) | 2016-08-05 | 2019-12-10 | Oracle International Corporation | Virtual directory system for LDAP to SCIM proxy service |
| US10530578B2 (en) | 2016-08-05 | 2020-01-07 | Oracle International Corporation | Key store service |
| US10255061B2 (en) | 2016-08-05 | 2019-04-09 | Oracle International Corporation | Zero down time upgrade for a multi-tenant identity and data security management cloud service |
| US10735394B2 (en) | 2016-08-05 | 2020-08-04 | Oracle International Corporation | Caching framework for a multi-tenant identity and data security management cloud service |
| US10263947B2 (en) | 2016-08-05 | 2019-04-16 | Oracle International Corporation | LDAP to SCIM proxy service |
| US10484382B2 (en) | 2016-08-31 | 2019-11-19 | Oracle International Corporation | Data management for a multi-tenant identity cloud service |
| US10846390B2 (en) | 2016-09-14 | 2020-11-24 | Oracle International Corporation | Single sign-on functionality for a multi-tenant identity and data security management cloud service |
| US10511589B2 (en) | 2016-09-14 | 2019-12-17 | Oracle International Corporation | Single logout functionality for a multi-tenant identity and data security management cloud service |
| US10594684B2 (en) | 2016-09-14 | 2020-03-17 | Oracle International Corporation | Generating derived credentials for a multi-tenant identity cloud service |
| US10791087B2 (en) | 2016-09-16 | 2020-09-29 | Oracle International Corporation | SCIM to LDAP mapping using subtype attributes |
| US10445395B2 (en) | 2016-09-16 | 2019-10-15 | Oracle International Corporation | Cookie based state propagation for a multi-tenant identity cloud service |
| US10484243B2 (en) | 2016-09-16 | 2019-11-19 | Oracle International Corporation | Application management for a multi-tenant identity cloud service |
| US10567364B2 (en) | 2016-09-16 | 2020-02-18 | Oracle International Corporation | Preserving LDAP hierarchy in a SCIM directory using special marker groups |
| EP3513542B1 (en) | 2016-09-16 | 2021-05-19 | Oracle International Corporation | Tenant and service management for a multi-tenant identity and data security management cloud service |
| US10341354B2 (en) | 2016-09-16 | 2019-07-02 | Oracle International Corporation | Distributed high availability agent architecture |
| US10904074B2 (en) | 2016-09-17 | 2021-01-26 | Oracle International Corporation | Composite event handler for a multi-tenant identity cloud service |
| US10261836B2 (en) | 2017-03-21 | 2019-04-16 | Oracle International Corporation | Dynamic dispatching of workloads spanning heterogeneous services |
| US10454915B2 (en) | 2017-05-18 | 2019-10-22 | Oracle International Corporation | User authentication using kerberos with identity cloud service |
| US10348858B2 (en) | 2017-09-15 | 2019-07-09 | Oracle International Corporation | Dynamic message queues for a microservice based cloud service |
| US10831789B2 (en) | 2017-09-27 | 2020-11-10 | Oracle International Corporation | Reference attribute query processing for a multi-tenant cloud service |
| US11271969B2 (en) | 2017-09-28 | 2022-03-08 | Oracle International Corporation | Rest-based declarative policy management |
| US10834137B2 (en) | 2017-09-28 | 2020-11-10 | Oracle International Corporation | Rest-based declarative policy management |
| US10705823B2 (en) | 2017-09-29 | 2020-07-07 | Oracle International Corporation | Application templates and upgrade framework for a multi-tenant identity cloud service |
| US10715564B2 (en) | 2018-01-29 | 2020-07-14 | Oracle International Corporation | Dynamic client registration for an identity cloud service |
| US10659289B2 (en) | 2018-03-22 | 2020-05-19 | Servicenow, Inc. | System and method for event processing order guarantee |
| US10931656B2 (en) | 2018-03-27 | 2021-02-23 | Oracle International Corporation | Cross-region trust for a multi-tenant identity cloud service |
| US11165634B2 (en) | 2018-04-02 | 2021-11-02 | Oracle International Corporation | Data replication conflict detection and resolution for a multi-tenant identity cloud service |
| US10798165B2 (en) | 2018-04-02 | 2020-10-06 | Oracle International Corporation | Tenant data comparison for a multi-tenant identity cloud service |
| US11258775B2 (en) | 2018-04-04 | 2022-02-22 | Oracle International Corporation | Local write for a multi-tenant identity cloud service |
| US11012444B2 (en) | 2018-06-25 | 2021-05-18 | Oracle International Corporation | Declarative third party identity provider integration for a multi-tenant identity cloud service |
| US10764273B2 (en) | 2018-06-28 | 2020-09-01 | Oracle International Corporation | Session synchronization across multiple devices in an identity cloud service |
| US11693835B2 (en) | 2018-10-17 | 2023-07-04 | Oracle International Corporation | Dynamic database schema allocation on tenant onboarding for a multi-tenant identity cloud service |
| US11321187B2 (en) | 2018-10-19 | 2022-05-03 | Oracle International Corporation | Assured lazy rollback for a multi-tenant identity cloud service |
| CN111510469B (zh) * | 2019-01-31 | 2023-04-25 | Shanghai Bilibili Technology Co., Ltd. | Message processing method and apparatus |
| US11651357B2 (en) | 2019-02-01 | 2023-05-16 | Oracle International Corporation | Multifactor authentication without a user footprint |
| US11061929B2 (en) | 2019-02-08 | 2021-07-13 | Oracle International Corporation | Replication of resource type and schema metadata for a multi-tenant identity cloud service |
| US11321343B2 (en) | 2019-02-19 | 2022-05-03 | Oracle International Corporation | Tenant replication bootstrap for a multi-tenant identity cloud service |
| US11669321B2 (en) | 2019-02-20 | 2023-06-06 | Oracle International Corporation | Automated database upgrade for a multi-tenant identity cloud service |
| US11792226B2 (en) | 2019-02-25 | 2023-10-17 | Oracle International Corporation | Automatic api document generation from scim metadata |
| US11423111B2 (en) | 2019-02-25 | 2022-08-23 | Oracle International Corporation | Client API for rest based endpoints for a multi-tenant identify cloud service |
| US11687378B2 (en) | 2019-09-13 | 2023-06-27 | Oracle International Corporation | Multi-tenant identity cloud service with on-premise authentication integration and bridge high availability |
| US11870770B2 (en) | 2019-09-13 | 2024-01-09 | Oracle International Corporation | Multi-tenant identity cloud service with on-premise authentication integration |
| US11611548B2 (en) | 2019-11-22 | 2023-03-21 | Oracle International Corporation | Bulk multifactor authentication enrollment |
| US11863469B2 (en) * | 2020-05-06 | 2024-01-02 | International Business Machines Corporation | Utilizing coherently attached interfaces in a network stack framework |
| US12039382B2 (en) | 2022-02-28 | 2024-07-16 | Bank Of America Corporation | Real time intelligent message bus management tool |
| CN119363554B (zh) * | 2024-10-24 | 2025-10-21 | China Civil Aviation Information Network Co., Ltd. (TravelSky) | Service management method and apparatus based on a service bus, and electronic device |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20020078126A1 (en) * | 1999-06-15 | 2002-06-20 | Thomas Higgins | Broadband interconnection system |
| US20050080982A1 (en) * | 2003-08-20 | 2005-04-14 | Vasilevsky Alexander D. | Virtual host bus adapter and method |
| EP1684173A1 (en) * | 2004-12-30 | 2006-07-26 | Microsoft Corporation | Bus abstraction system and method for unifying device discovery and message transport in multiple bus implementations and networks |
Family Cites Families (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH1185673A (ja) * | 1997-09-02 | 1999-03-30 | Hitachi Ltd | Shared-bus control method and apparatus |
| US8099488B2 (en) * | 2001-12-21 | 2012-01-17 | Hewlett-Packard Development Company, L.P. | Real-time monitoring of service agreements |
| US7716061B2 (en) * | 2003-03-27 | 2010-05-11 | International Business Machines Corporation | Method and apparatus for obtaining status information in a grid |
| JP2005056408A (ja) * | 2003-07-23 | 2005-03-03 | Semiconductor Energy Lab Co Ltd | Microprocessor and grid computing system |
| US20070168454A1 (en) * | 2006-01-19 | 2007-07-19 | International Business Machines Corporation | System and method for host-to-host communication |
| JP2008152594A (ja) * | 2006-12-19 | 2008-07-03 | Hitachi Ltd | Method for improving the reliability of a multi-core processor computer |
| US8751626B2 (en) * | 2007-10-23 | 2014-06-10 | Microsoft Corporation | Model-based composite application platform |
| JP5047072B2 (ja) * | 2008-06-20 | 2012-10-10 | Mitsubishi Electric Corporation | Data transfer system, transfer device, monitoring device, transfer program, and monitoring program |
| JP2010134724A (ja) * | 2008-12-05 | 2010-06-17 | Canon It Solutions Inc | Message queuing monitoring device, message queuing monitoring method, program, and recording medium |
| JP5236581B2 (ja) * | 2009-06-16 | 2013-07-17 | NS Solutions Corporation | Transmission device, control method therefor, program, and information processing system |
| CN102262557B (zh) * | 2010-05-25 | 2015-01-21 | Transoft Network Technology (Shanghai) Co., Ltd. | Method for constructing a virtual machine monitor via a bus architecture, and a performance service framework |
| CN103562882B (zh) * | 2011-05-16 | 2016-10-12 | Oracle International Corporation | System and method for providing a messaging application program interface |
| US8805984B2 (en) * | 2011-07-14 | 2014-08-12 | Red Hat, Inc. | Multi-operational transactional access of in-memory data grids in a client-server environment |
| EP2893688B1 (en) * | 2012-09-07 | 2018-10-24 | Oracle International Corporation | System and method for supporting message pre-processing in a distributed data grid cluster |
- 2013
- 2013-09-06 EP EP13765579.1A patent/EP2893688B1/en active Active
- 2013-09-06 CN CN201380046453.1A patent/CN104620558B/zh active Active
- 2013-09-06 CN CN201380046457.XA patent/CN104620559B/zh active Active
- 2013-09-06 US US14/020,412 patent/US9535862B2/en active Active
- 2013-09-06 JP JP2015531256A patent/JP6310461B2/ja active Active
- 2013-09-06 WO PCT/US2013/058610 patent/WO2014039895A1/en not_active Ceased
- 2013-09-06 EP EP13766771.3A patent/EP2893689B1/en active Active
- 2013-09-06 US US14/020,422 patent/US9535863B2/en active Active
- 2013-09-06 JP JP2015531258A patent/JP6276273B2/ja active Active
- 2013-09-06 WO PCT/US2013/058605 patent/WO2014039890A1/en not_active Ceased
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20020078126A1 (en) * | 1999-06-15 | 2002-06-20 | Thomas Higgins | Broadband interconnection system |
| US20050080982A1 (en) * | 2003-08-20 | 2005-04-14 | Vasilevsky Alexander D. | Virtual host bus adapter and method |
| EP1684173A1 (en) * | 2004-12-30 | 2006-07-26 | Microsoft Corporation | Bus abstraction system and method for unifying device discovery and message transport in multiple bus implementations and networks |
Non-Patent Citations (2)
| Title |
|---|
| SAYANTAN SUR ET AL: "RDMA read based rendezvous protocol for MPI over InfiniBand", PROCEEDINGS OF THE ELEVENTH ACM SIGPLAN SYMPOSIUM ON PRINCIPLES AND PRACTICE OF PARALLEL PROGRAMMING , PPOPP '06, 1 January 2006 (2006-01-01), New York, New York, USA, pages 32, XP055035735, ISBN: 978-1-59-593189-4, DOI: 10.1145/1122971.1122978 * |
| TABOADA G L ET AL: "Efficient Java Communication Libraries over InfiniBand", HIGH PERFORMANCE COMPUTING AND COMMUNICATIONS, 2009. HPCC '09. 11TH IEEE INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 25 June 2009 (2009-06-25), pages 329 - 338, XP031491412, ISBN: 978-1-4244-4600-1 * |
Also Published As
| Publication number | Publication date |
|---|---|
| EP2893688B1 (en) | 2018-10-24 |
| EP2893688A1 (en) | 2015-07-15 |
| CN104620559A (zh) | 2015-05-13 |
| EP2893689A1 (en) | 2015-07-15 |
| CN104620559B (zh) | 2018-03-27 |
| US20140075078A1 (en) | 2014-03-13 |
| CN104620558A (zh) | 2015-05-13 |
| JP6276273B2 (ja) | 2018-02-07 |
| JP2015527681A (ja) | 2015-09-17 |
| US9535862B2 (en) | 2017-01-03 |
| CN104620558B (zh) | 2018-02-16 |
| WO2014039890A1 (en) | 2014-03-13 |
| JP2015531512A (ja) | 2015-11-02 |
| US20140075071A1 (en) | 2014-03-13 |
| EP2893689B1 (en) | 2019-03-13 |
| JP6310461B2 (ja) | 2018-04-11 |
| US9535863B2 (en) | 2017-01-03 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP2893688B1 (en) | | System and method for supporting message pre-processing in a distributed data grid cluster |
| US9787561B2 (en) | | System and method for supporting a selection service in a server environment |
| EP3039844B1 (en) | | System and method for supporting partition level journaling for synchronizing data in a distributed data grid |
| EP3087483B1 (en) | | System and method for supporting asynchronous invocation in a distributed data grid |
| CN105100185B (zh) | | System and method for handling database state notifications in a transactional middleware machine environment |
| US20160077739A1 (en) | | System and method for supporting a low contention queue in a distributed data grid |
| US9672038B2 (en) | | System and method for supporting a scalable concurrent queue in a distributed data grid |
| US10067841B2 (en) | | Facilitating n-way high availability storage services |
| US20140089260A1 (en) | | Workload transitioning in an in-memory data grid |
| US20150169367A1 (en) | | System and method for supporting adaptive busy wait in a computing environment |
| US20160011929A1 (en) | | Methods for facilitating high availability storage services in virtualized cloud environments and devices thereof |
| WO2016122723A1 (en) | | Methods for facilitating n-way high availability storage services and devices thereof |
| WO2015099974A1 (en) | | System and method for supporting asynchronous invocation in a distributed data grid |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13765579; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 2015531258; Country of ref document: JP; Kind code of ref document: A |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | WWE | Wipo information: entry into national phase | Ref document number: 2013765579; Country of ref document: EP |