WO2015099974A1 - System and method for supporting asynchronous invocation in a distributed data grid - Google Patents
System and method for supporting asynchronous invocation in a distributed data grid
- Publication number
- WO2015099974A1 (PCT/US2014/068659)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- server
- data grid
- distributed data
- tasks
- unit
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/52—Program synchronisation; Mutual exclusion, e.g. by means of semaphores
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/202—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
- G06F11/2035—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant without idle spare hardware
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/202—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
- G06F11/2023—Failover techniques
- G06F11/2028—Failover techniques eliminating a faulty processor or activating a spare
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/202—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
- G06F11/2048—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant where the redundant components share neither address space nor persistent storage
Definitions
- the present invention is generally related to computer systems, and is particularly related to supporting task management in a distributed data grid.
- Described herein are systems and methods that can support asynchronous invocation in a distributed data grid with a plurality of server nodes.
- the system allows a server node in the distributed data grid to receive one or more tasks from a client, wherein said one or more tasks are associated with a unit-of-order.
- the system can execute said one or more tasks on one or more said server nodes in the distributed data grid, based on the unit-of-order that is guaranteed by the distributed data grid.
- Figure 1 is an illustration of a data grid cluster in accordance with various embodiments of the invention.
- Figure 2 shows an illustration of supporting pluggable association/unit-of-order in a distributed data grid, in accordance with an embodiment of the invention.
- Figure 3 shows an illustration of supporting asynchronous invocation in a distributed data grid, in accordance with an embodiment of the invention.
- Figure 4 illustrates an exemplary flow chart for supporting asynchronous message processing in a distributed data grid in accordance with an embodiment of the invention.
- Figure 5 shows an illustration of supporting delegatable flow control in a distributed data grid, in accordance with an embodiment of the invention.
- Figure 6 shows an illustration of performing backlog draining in a distributed data grid, in accordance with an embodiment of the invention.
- Figure 7 shows an illustration of providing a future task to a distributed data grid, in accordance with an embodiment of the invention.
- Figure 8 illustrates an exemplary flow chart for supporting delegatable flow control in a distributed data grid in accordance with an embodiment of the invention.
- Figure 9 shows a block diagram of a system for supporting asynchronous invocation in a distributed data grid in accordance with an embodiment of the invention.
- Figure 10 illustrates a functional configuration of an embodiment of the invention.
- Figure 11 illustrates a computer system with an embodiment of the invention implemented with the computer system.
- Described herein are systems and methods that can support task management, such as asynchronous invocation and flow control, in a distributed data grid.
- a “data grid cluster”, or “data grid” is a system comprising a plurality of computer servers which work together to manage information and related operations, such as computations, within a distributed or clustered environment.
- the data grid cluster can be used to manage application objects and data that are shared across the servers.
- a data grid cluster should have low response time, high throughput, predictable scalability, continuous availability and information reliability. As a result of these capabilities, data grid clusters are well suited for use in computational intensive, stateful middle-tier applications.
- data grid clusters can store the information in-memory to achieve higher performance, and can employ redundancy in keeping copies of that information synchronized across multiple servers, thus ensuring resiliency of the system and the availability of the data in the event of server failure.
- the Oracle Coherence data grid cluster provides replicated and distributed (partitioned) data management and caching services on top of a reliable, highly scalable peer-to-peer clustering protocol.
- An in-memory data grid can provide the data storage and management capabilities by distributing data over a number of servers working together.
- the data grid can be middleware that runs in the same tier as an application server or within an application server.
- the in-memory data grid can eliminate single points of failure by automatically and transparently failing over and redistributing its clustered data management services when a server becomes inoperative or is disconnected from the network. When a new server is added, or when a failed server is restarted, it can automatically join the cluster and services can be failed back over to it, transparently redistributing the cluster load.
- the data grid can also include network-level fault tolerance features and transparent soft re-start capability.
- the functionality of a data grid cluster is based on using different cluster services.
- the cluster services can include root cluster services, partitioned cache services, and proxy services.
- each cluster node can participate in a number of cluster services, both in terms of providing and consuming the cluster services.
- Each cluster service has a service name that uniquely identifies the service within the data grid cluster, and a service type, which defines what the cluster service can do.
- the services can be either configured by the user, or provided by the data grid cluster as a default set of services.
- FIG. 1 is an illustration of a data grid cluster in accordance with various embodiments of the invention.
- a data grid cluster 100, e.g. an Oracle Coherence data grid cluster, includes a plurality of server nodes (such as cluster nodes 101-106) having various cluster services 111-116 running thereon. Additionally, a cache configuration file 110 can be used to configure the data grid cluster 100.
- the distributed data grid can support pluggable association/unit-of-order in a distributed data grid.
- FIG. 2 shows an illustration of supporting pluggable association/unit-of-order in a distributed data grid, in accordance with an embodiment of the invention.
- a distributed data grid 201 can include a plurality of server nodes, e.g. server nodes 211-216.
- the distributed data grid 201 can receive one or more tasks, e.g. tasks A-C 221-223, from the clients. Then, the distributed data grid 201 can distribute the tasks A-C 221-223 to different server nodes for execution.
- the server node 211 can be responsible for executing the task A 221
- the server node 214 can be responsible for executing the task C 223
- the server node 215 can be responsible for executing the task B 222.
- the computing system 200 allows the tasks A-C 221-223 to be associated with a unit-of-order 220 (or an association).
- a unit-of-order 220 is a partial-ordering scheme that does not impose a system-wide order of updates (i.e. not a total ordering scheme).
- the unit-of-order 220 can be a transactional stream, where every operation in this particular stream is preserved in-order, but no order is implied to operations that happen in other streams.
- the distributed data grid 201 can provide a unit-of-order guarantee 210, which can be supported based on a peer-to-peer clustering protocol.
- the system can ensure that the tasks A-C 221-223 are executed by the distributed data grid 201 in a particular order as prescribed in the unit-of-order 220, even though the tasks A-C 221-223 may be received and executed on different server nodes 211-216 in the distributed data grid 201.
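The per-unit ordering described above can be sketched as a small dispatcher: tasks tagged with the same unit-of-order key are kept in a FIFO queue and run in submission order, while tasks in different units are independent. The class and method names here are hypothetical illustrations for this document, not the patent's or Oracle Coherence's actual API.

```java
import java.util.*;

// Hypothetical sketch: tasks tagged with a unit-of-order key are dispatched
// to per-unit FIFO queues, preserving order within a unit but implying no
// order across units (a partial-ordering scheme, not a total one).
class UnitOfOrderDispatcher {
    private final Map<String, Queue<Runnable>> queues = new HashMap<>();

    // Associate a task with a unit-of-order and buffer it in that unit's queue.
    void submit(String unitOfOrder, Runnable task) {
        queues.computeIfAbsent(unitOfOrder, k -> new ArrayDeque<>()).add(task);
    }

    // Drain one unit's queue in FIFO order; a stand-in for in-order execution
    // on whichever server nodes happen to own the unit.
    void drain(String unitOfOrder) {
        Queue<Runnable> q = queues.getOrDefault(unitOfOrder, new ArrayDeque<>());
        Runnable t;
        while ((t = q.poll()) != null) {
            t.run();
        }
    }
}
```

Draining "stream-1" runs its tasks in submission order regardless of how tasks in "stream-2" are interleaved, mirroring the transactional-stream behavior described above.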
- the unit-of-order 220 can be configured in a pluggable fashion, i.e., a client can change the unit-of-order 220 dynamically.
Request Ordering/Causality during Failover
- the distributed data grid can support request ordering/causality during failover.
- FIG. 3 shows an illustration of supporting asynchronous invocation in a distributed data grid, in accordance with an embodiment of the invention.
- a server node in the distributed data grid 301 can function as a primary server 311, which is responsible for executing one or more tasks 321 received from a client 302.
- the primary server 311 can be associated with one or more back-up server nodes, e.g. a back-up server 312. As shown in Figure 3, after the primary server 311 executes the tasks 321 received from the client 302, the primary server 311 can send different results and artifacts 322 to the back-up server 312. In accordance with an embodiment of the invention, the primary server 311 may wait to receive an acknowledgement from the back-up server 312 before returning the results 324 to the client 302.
- the back-up server 312 may take over and can be responsible for executing the failover tasks 323.
- the back-up server 312 can check whether each of the failover tasks 323 has already been executed by the primary server 311. For example, when a particular failover task 323 has already been executed by the primary server 311, the back-up server 312 can return the results 324 back to the client 302 immediately. Otherwise, the back-up server 312 can proceed to execute the failover task 323 before returning the results 324 back to the client.
- the back-up server 312 can determine when to execute the failover tasks 323, based on the request ordering in the unit-of-order guarantee 310. In other words, the system can make sure that the failover tasks 323 are executed in the right order, even when a failover happens in the distributed data grid 301.
- the computing system 300 can ensure both the idempotency in executing the one or more tasks 321 received from the client 302 and the request ordering as provided by the unit-of-order guarantee 310 in the distributed data grid 301.
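The idempotency check described above can be sketched as follows: the back-up keeps the results the primary forwarded before acknowledging the client, and on failover it re-executes only tasks with no recorded result. This is a hypothetical illustration; the class and method names are not from the patent or any real Coherence API.

```java
import java.util.*;
import java.util.function.Supplier;

// Hypothetical sketch of the back-up server's failover check: results and
// artifacts pushed from the primary are recorded per task id. On failover,
// an already-executed task returns its stored result immediately (idempotency);
// otherwise the back-up executes the task itself before replying.
class BackupServer {
    private final Map<String, String> completedResults = new HashMap<>();

    // Called when the primary forwards results/artifacts before acking the client.
    void recordResult(String taskId, String result) {
        completedResults.put(taskId, result);
    }

    // Called for each failover task after the primary fails. Skips re-execution
    // if the primary already ran this task; executes it otherwise.
    String failover(String taskId, Supplier<String> task) {
        return completedResults.computeIfAbsent(taskId, id -> task.get());
    }
}
```

A task whose result was already recorded is never run twice, which is how the sketch preserves both idempotency and the unit-of-order guarantee across failover.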
- FIG. 4 illustrates an exemplary flow chart for supporting asynchronous message processing in a distributed data grid in accordance with an embodiment of the invention.
- a server node in a distributed data grid with a plurality of server nodes can receive one or more tasks.
- the system allows said one or more tasks to be associated with a unit-of-order.
- the system can execute said one or more tasks on one or more said server nodes based on the unit-of-order that is guaranteed by the distributed data grid.
Delegatable Flow Control
- the distributed data grid can expose the flow control mechanism to an outside client and allows for delegatable flow control.
- FIG. 5 shows an illustration of supporting delegatable flow control in a distributed data grid, in accordance with an embodiment of the invention.
- distributed data grid 501 can receive one or more tasks from a client 502. Furthermore, the distributed data grid 501 can use an underlying layer 503 for executing the received tasks.
- the underlying layer 503 can include a plurality of server nodes 511-516 that are interconnected using one or more communication channels 510.
- the delay in the distributed data grid 501, which may contribute to a backlog of tasks, can include both the delay on the server nodes 511-516 for processing the tasks and the delay in the communication channels 510 for transporting the tasks and related artifacts, such as the results.
- the computing system 500 supports a flow control mechanism 520 that controls the execution of the tasks in an underlying layer 503 in the distributed data grid 501.
- the flow control mechanism 520 can provide different communication facilities that support an asynchronous (non-blocking) way of submitting data exchange requests and provide various mechanisms for modulating the control flow for underlying data transfer units (e.g. messages or packets).
- the flow control mechanism 520 can support request buffering 522 and backlog detection 521 capabilities.
- the request buffering 522 represents that the distributed data grid 501 is able to buffer the incoming requests in a distributed manner across various server nodes 511-516 in the distributed data grid 501.
- the backlog detection 521 represents that the distributed data grid 501 is able to detect the backlogs in processing the buffered requests at different server nodes 511-516 in the underlying layer 503 (e.g. using a peer-to-peer protocol).
- the system allows a client to interact with the flow control mechanism 520.
- the flow control mechanism 520 can represent (or provide) a facet of a communication end point for a client 502.
- the Oracle Coherence data grid cluster can provide an application programming interface (API) to the client 502.
- the client 502 can dynamically configure the flow control mechanism 520 via a simple and convenient interface.
- the flow control mechanism 520 may allow the client 502 to opt-out from an automatic flow control (which is desirable in many cases) and manually govern the rate of the request flow.
- manual flow control may be preferable in various scenarios, such as an "auto-flush" use case and other use cases with backlog-related delays, where the caller is itself a part of an asynchronous communication flow.
- the computing system 500 can set a threshold in the flow control mechanism 520, wherein the threshold can regulate the backlog of tasks to be executed in the distributed data grid 501. For example, when the length of the backlog of tasks to be executed in the distributed data grid 501 exceeds the threshold, the distributed data grid 501 can either reject a request for executing said tasks, or reconfigure the tasks to be executed at a later time (i.e., reconfiguring a synchronous task to an asynchronous task).
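The threshold behavior described above can be sketched as a simple regulator: once the pending backlog reaches the threshold, a new request is not executed immediately but deferred (reconfigured from a synchronous task to an asynchronous one). The class below is a hypothetical illustration, not the patent's actual mechanism.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the backlog threshold: when the pending-task count
// exceeds the threshold, a request is deferred for later (asynchronous)
// execution instead of being accepted for immediate execution.
class BacklogRegulator {
    private final int threshold;
    private final Deque<Runnable> pending = new ArrayDeque<>();
    private final Deque<Runnable> deferred = new ArrayDeque<>();

    BacklogRegulator(int threshold) {
        this.threshold = threshold;
    }

    // Returns true if the task is accepted for immediate execution,
    // false if the backlog is full and the task was deferred instead.
    boolean submit(Runnable task) {
        if (pending.size() >= threshold) {
            deferred.add(task);   // reconfigure to execute at a later time
            return false;
        }
        pending.add(task);
        return true;
    }

    int pendingCount()  { return pending.size(); }
    int deferredCount() { return deferred.size(); }
}
```

A variant of `submit` could instead reject the request outright, matching the alternative behavior the text describes.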
- Figure 6 shows an illustration of performing backlog draining in a distributed data grid, in accordance with an embodiment of the invention.
- a calling thread 602 in the computing system 600, which is associated with a client, can check for an excessive backlog 620 that relates to a distributed request buffer 611 in the distributed data grid 601.
- the client i.e. via the calling thread 602 can provide the distributed data grid 601 with information about the maximum amount of time it can wait (e.g. in milliseconds) 621.
- the distributed data grid 601 can provide the calling thread 602 with information on the remaining timeout 622. Then, the distributed data grid 601 can block the calling thread 602 while draining the backlog 620 (i.e. dispatching the buffered tasks in the request buffer 611 to the underlying layer 610 for execution).
- Figure 7 shows an illustration of providing a future task to a distributed data grid, in accordance with an embodiment of the invention.
- a calling thread 702 in the computing system 700, which is associated with a client, can check for an excessive backlog 720 that relates to a distributed request buffer 711 in the distributed data grid 701.
- the client i.e. via the calling thread 702 can provide the distributed data grid 701 with a future task, e.g. a continuation 703, if the backlog 720 is abnormal (e.g. when the underlying communication channel is clogged).
- the distributed data grid 701 can call the continuation 703.
- the system can dispatch the task contained in the continuation 703 to the underlying layer 710 for execution.
- the continuation 703 can be called on any thread, including a thread 704 that is concurrent with the calling thread 702. Also, the continuation 703 can be called by the calling thread 702 itself.
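The continuation hand-off described above can be sketched in Java: when the backlog is abnormal, the caller registers a future task, and the grid invokes it once the backlog drains, possibly on a thread other than the caller's. All names here are hypothetical illustrations, not the patent's actual API.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;

// Hypothetical sketch: a caller registers a continuation when the backlog is
// abnormal; once the backlog returns to normal, the continuation may be run
// on any thread (here, a worker thread concurrent with the caller).
class BacklogChecker {
    private volatile boolean backlogged = true;
    private Runnable continuation;

    // Returns true (and stores the continuation) iff the backlog is abnormal.
    synchronized boolean checkBacklog(Runnable continueNormal) {
        if (backlogged) {
            this.continuation = continueNormal;
            return true;
        }
        return false;
    }

    // Called when the backlog drains; runs the stored continuation on a worker.
    void backlogNormal(ExecutorService worker) {
        backlogged = false;
        Runnable c;
        synchronized (this) {
            c = continuation;
            continuation = null;
        }
        if (c != null) {
            try {
                worker.submit(c).get();  // block until the worker thread ran it
            } catch (InterruptedException | ExecutionException e) {
                throw new RuntimeException(e);
            }
        }
    }
}
```

Running the continuation on the caller's own thread instead would just mean invoking `c.run()` directly, matching the text's note that either thread may execute it.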
- Figure 8 illustrates an exemplary flow chart for supporting delegatable flow control in a distributed data grid in accordance with an embodiment of the invention.
- the system can provide a flow control mechanism in the distributed data grid, wherein the distributed data grid includes a plurality of server nodes that are interconnected with one or more communication channels.
- the system allows a client to interact with the flow control mechanism in the distributed data grid.
- the system can use the flow control mechanism for configuring and executing one or more tasks that are received from the client.
- Figure 9 shows a block diagram of a system 900 for supporting asynchronous invocation in a distributed data grid in accordance with an embodiment of the invention.
- the blocks of the system 900 may be implemented by hardware, software, or a combination of hardware and software to carry out the principles of the invention. It is understood by those skilled in the art that the blocks described in Figure 9 may be combined or separated into sub-blocks to implement the principles of the invention as described above. Therefore, the description herein may support any possible combination or separation or further definition of the functional blocks described herein.
- the system 900 for supporting asynchronous invocation in a distributed data grid comprises a receiving unit 910, an associating unit 920, and an executing unit 930.
- the receiving unit 910 can receive one or more tasks at a server node in the distributed data grid with a plurality of server nodes.
- the associating unit 920 can associate said one or more tasks with a unit-of-order.
- the executing unit 930 can execute said one or more tasks on one or more said server nodes, based on the unit-of-order that is guaranteed by the distributed data grid.
- the plurality of server nodes in the distributed data grid uses a peer-to-peer clustering protocol to support a unit-of-order guarantee.
- At least one task is received at another server node in the distributed data grid.
- the at least one task is also associated with the unit-of-order.
- the at least one task is executed at a server node in the distributed data grid, based on the unit-of-order guaranteed by the distributed data grid.
- a server node is a primary server for executing the one or more tasks in the distributed data grid, and at least one another server in the distributed data grid is used as a back-up server for executing the one or more tasks.
- the primary server operates to send results and artifacts that are associated with an execution of the one or more tasks to the back-up server, before returning the results to the client.
- the back-up server operates to check whether the one or more tasks have been executed by the primary server, when the primary server fails.
- the back-up server operates to return the results to the client, if the one or more tasks have been executed by the primary server.
- the back-up server operates to determine when to execute the one or more tasks based on the unit-of-order guaranteed by the distributed data grid, if the one or more tasks have not been executed by the primary server, and return the results to a client after executing the one or more tasks.
- the FlowControl interface can include a flush() function, which may be a non-blocking call. Furthermore, the flush() function ensures that the buffered asynchronous operations are dispatched to the underlying tier.
- the FlowControl interface can include a drainBacklog(long cMillis) function, which can check for an excessive backlog in the distributed data grid and allows for blocking the calling thread for up to a specified amount of time.
- the drainBacklog(long cMillis) function can take an input parameter, cMillis, which specifies the maximum amount of time to wait (e.g. in milliseconds).
- the input parameter, cMillis can be specified as zero, which indicates an infinite waiting time.
- the drainBacklog(long cMillis) function can return the remaining timeout to the calling thread.
- the drainBacklog(long cMillis) function can return a negative value if timeout has occurred. Additionally, the drainBacklog(long cMillis) function can return zero, which indicates that the backlog is no longer excessive.
- the above FlowControl interface can include a checkBacklog(Continuation<Void> continueNormal) function, which checks for an excessive backlog.
- the checkBacklog(Continuation<Void> continueNormal) function can return true if the underlying communication channel is backlogged, or return false otherwise.
- the checkBacklog(Continuation<Void> continueNormal) function can accept future work via an input parameter, continueNormal.
- continueNormal can be called after the backlog is reduced back to normal.
- the future work, continueNormal, can be called by any thread that is concurrent with the calling thread, or by the calling thread itself. Additionally, the continuation is called only if the checkBacklog(Continuation<Void> continueNormal) function returns true.
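The FlowControl contract described above can be rendered as a toy, in-memory sketch, assuming Continuation<Void> is a simple callback. Only the drainBacklog return-value conventions from the text are modeled (positive = remaining timeout, zero = backlog no longer excessive, negative = timeout occurred); the ToyFlowControl implementation and its 10 ms cost-per-task are invented for illustration and are not the patent's or Coherence's actual implementation.

```java
// Hypothetical stand-in for the Continuation<Void> callback named in the text.
interface Continuation<T> {
    void proceed(T value);
}

// The FlowControl operations described in the text: a non-blocking flush,
// a bounded-wait backlog drain, and a backlog check with a continuation.
interface FlowControl {
    void flush();                                      // non-blocking dispatch
    long drainBacklog(long cMillis);                   // cMillis == 0: wait forever
    boolean checkBacklog(Continuation<Void> continueNormal);
}

// Toy in-memory model: each buffered task "costs" 10 ms of the caller's budget.
class ToyFlowControl implements FlowControl {
    private int backlog;

    ToyFlowControl(int backlog) {
        this.backlog = backlog;
    }

    public void flush() {
        // Would dispatch buffered asynchronous operations; a no-op here.
    }

    public long drainBacklog(long cMillis) {
        long remaining = (cMillis == 0) ? Long.MAX_VALUE : cMillis;
        while (backlog > 0 && remaining >= 10) {
            backlog--;            // drain one buffered task
            remaining -= 10;      // spend part of the caller's time budget
        }
        if (backlog > 0) {
            return -1;            // negative: timeout, backlog still excessive
        }
        return (cMillis == 0) ? 0 : remaining;  // zero or remaining timeout
    }

    public boolean checkBacklog(Continuation<Void> continueNormal) {
        // Would store continueNormal to be invoked once the backlog is normal.
        return backlog > 0;
    }
}
```

A caller that receives a negative value knows the wait timed out with the backlog still excessive, while zero or a positive remainder means the backlog fully drained within the budget.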
- System 1000 includes a receiver module 1010, a storage module 1020, a manager 1030, and a task executor module 1040.
- Receiver module 1010 is provided at a server node in the distributed data grid with a plurality of server nodes. Receiver module 1010 receives one or more tasks. Storage module 1020 stores a unit-of-order. Manager 1030 allows the received one or more tasks to be associated with the unit-of-order. Task executor module 1040 executes the one or more tasks on one or more server nodes, based on the unit-of-order that is guaranteed by the distributed data grid.
- FIG. 11 shows an illustration of a computer system 1100 which includes well-known hardware elements.
- computer system 1100 includes a central processing unit (CPU) 1110, a mouse 1120, a keyboard 1130, a random access memory (RAM) 1140, a hard disc 1150, a disc drive 1160, a communication interface (I/F) 1170, and a monitor 1180.
- Computer system 1100 may function as a server node constituting system 1000.
- receiver module 1010, manager 1030, storage module 1020, and task executor module 1040 are provided by one or more computer systems 1000.
- Receiver module 1010, manager 1030, storage module 1020, and task executor module 1040 are implemented by CPU 1110.
- more than one processor can be used to implement receiver module 1010, manager 1030, storage module 1020, and task executor module 1040. Namely, any of receiver module 1010, manager 1030, storage module 1020, and task executor module 1040 can be physically remote from each other.
- system 1000 can be realized by using a plurality of hardwired circuits which function as receiver module 1010, manager 1030, storage module 1020, and task executor module 1040.
- the present invention may be conveniently implemented using one or more conventional general purpose or specialized digital computer, computing device, machine, or microprocessor, including one or more processors, memory and/or computer readable storage media programmed according to the teachings of the present disclosure.
- Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art.
- the present invention includes a computer program product which is a storage medium or computer readable medium (media) having instructions stored thereon/in which can be used to program a computer to perform any of the processes of the present invention.
- the storage medium can include, but is not limited to, any type of disk including floppy disks, optical discs, DVD, CD-ROMs, microdrive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
Abstract
Description
Claims
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201480070817.4A CN105874433B (en) | 2013-12-27 | 2014-12-04 | System and method for supporting asynchronous calls in a distributed data grid |
EP14825490.7A EP3087483B1 (en) | 2013-12-27 | 2014-12-04 | System and method for supporting asynchronous invocation in a distributed data grid |
JP2016542156A JP6615761B2 (en) | 2013-12-27 | 2014-12-04 | System and method for supporting asynchronous calls in a distributed data grid |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361921320P | 2013-12-27 | 2013-12-27 | |
US61/921,320 | 2013-12-27 | ||
US14/322,540 | 2014-07-02 | ||
US14/322,562 US9846618B2 (en) | 2013-12-27 | 2014-07-02 | System and method for supporting flow control in a distributed data grid |
US14/322,562 | 2014-07-02 | ||
US14/322,540 US9703638B2 (en) | 2013-12-27 | 2014-07-02 | System and method for supporting asynchronous invocation in a distributed data grid |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015099974A1 (en) | 2015-07-02 |
Family
ID=52345506
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2014/068659 WO2015099974A1 (en) | 2013-12-27 | 2014-12-04 | System and method for supporting asynchronous invocation in a distributed data grid |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2015099974A1 (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020103816A1 (en) * | 2001-01-31 | 2002-08-01 | Shivaji Ganesh | Recreation of archives at a disaster recovery site |
US20080263106A1 (en) * | 2007-04-12 | 2008-10-23 | Steven Asherman | Database queuing and distributed computing |
US8046780B1 (en) * | 2005-09-20 | 2011-10-25 | Savi Technology, Inc. | Efficient processing of assets with multiple data feeds |
- 2014-12-04: WO PCT/US2014/068659 patent/WO2015099974A1/en active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020103816A1 (en) * | 2001-01-31 | 2002-08-01 | Shivaji Ganesh | Recreation of archives at a disaster recovery site |
US8046780B1 (en) * | 2005-09-20 | 2011-10-25 | Savi Technology, Inc. | Efficient processing of assets with multiple data feeds |
US20080263106A1 (en) * | 2007-04-12 | 2008-10-23 | Steven Asherman | Database queuing and distributed computing |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10050857B2 (en) | System and method for supporting a selection service in a server environment | |
EP3087483B1 (en) | System and method for supporting asynchronous invocation in a distributed data grid | |
US9535863B2 (en) | System and method for supporting message pre-processing in a distributed data grid cluster | |
JP4637842B2 (en) | Fast application notification in clustered computing systems | |
US20160119197A1 (en) | System and method for supporting service level quorum in a data grid cluster | |
EP3039844B1 (en) | System and method for supporting partition level journaling for synchronizing data in a distributed data grid | |
US10769019B2 (en) | System and method for data recovery in a distributed data computing environment implementing active persistence | |
US9569224B2 (en) | System and method for adaptively integrating a database state notification service with a distributed transactional middleware machine | |
CN105830029B (en) | system and method for supporting adaptive busy-wait in a computing environment | |
CN101442437B (en) | Method, system and equipment for implementing high availability | |
US10348814B1 (en) | Efficient storage reclamation for system components managing storage | |
WO2015099974A1 (en) | System and method for supporting asynchronous invocation in a distributed data grid | |
US7558858B1 (en) | High availability infrastructure with active-active designs | |
US11570110B2 (en) | Cloud system architecture and workload management |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14825490 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2016542156 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REEP | Request for entry into the european phase |
Ref document number: 2014825490 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2014825490 Country of ref document: EP |