WO2013049399A1 - System and method for providing and managing message queues for multinode applications in a middleware machine environment - Google Patents


Info

Publication number
WO2013049399A1
WO2013049399A1 (PCT/US2012/057634)
Authority
WO
WIPO (PCT)
Prior art keywords
message
data structure
shared memory
queue
control data
Prior art date
Application number
PCT/US2012/057634
Other languages
French (fr)
Inventor
Richard Frank
Todd Little
Arun KAIMALETTU
Leonard TOMINNA
Original Assignee
Oracle International Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oracle International Corporation filed Critical Oracle International Corporation
Priority to IN1390CHN2014 priority Critical patent/IN2014CN01390A/en
Priority to EP12773178.4A priority patent/EP2761454A1/en
Priority to JP2014533333A priority patent/JP6238898B2/en
Priority to CN201280047474.0A priority patent/CN103827829B/en
Priority to KR1020147009464A priority patent/KR102011949B1/en
Publication of WO2013049399A1 publication Critical patent/WO2013049399A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/54: Interprogram communication
    • G06F 9/544: Buffers; Shared memory; Pipes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/54: Interprogram communication
    • G06F 9/546: Message passing systems or structures, e.g. queues
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00: Indexing scheme relating to G06F9/00
    • G06F 2209/54: Indexing scheme relating to G06F9/54
    • G06F 2209/548: Queue

Definitions

  • the present invention is generally related to computer systems and software such as middleware, and is particularly related to supporting a transactional middleware machine environment.
  • a transactional middleware system, or transaction-oriented middleware, includes enterprise application servers that can process various transactions within an organization.
  • the transactional middleware machine environment includes a message control data structure on a message receiver and a heap data structure in a shared memory that is associated with the message receiver.
  • the message sender operates to write a message directly into the heap data structure, and to maintain metadata associated with the message in the message control data structure.
  • the middleware machine environment includes a shared memory on a message receiver, wherein the shared memory maintains one or more message queues for the middleware machine environment.
  • the middleware machine environment includes a daemon process that is capable of creating at least one message queue in the shared memory, when a client requests that the at least one message queue be set up to support sending and receiving messages.
  • Figure 1 shows an illustration of providing message queues for multinode applications in a middleware machine environment, in accordance with an embodiment of the invention.
  • Figure 2 illustrates an exemplary flow chart for supporting accurate load balance in a middleware machine environment, in accordance with an embodiment of the invention.
  • Figure 3 shows an illustration of providing remote memory rings for multinode applications in a middleware machine environment, in accordance with an embodiment of the invention.
  • Figure 4 shows an illustration of a message queue that can be concurrently accessed by multiple message senders in a middleware machine environment, in accordance with an embodiment of the invention.
  • Figure 5 shows an illustration of using System V message queues for multinode applications in a middleware machine environment, in accordance with an embodiment of the invention.
  • Figure 6 shows an illustration of remote direct memory access (RDMA) message queues for multinode applications in a middleware machine environment, in accordance with an embodiment of the invention.
  • Figure 7 shows an illustration of a daemon process that can create and manage a message queue in a middleware machine environment, in accordance with an embodiment of the invention.
  • Figure 8 illustrates an exemplary flow chart for supporting accurate load balance in a transactional middleware machine environment, in accordance with an embodiment of the invention.
  • Figure 9 shows an illustration of a security model that can be used to protect a message queue in a middleware machine environment, in accordance with an embodiment of the invention.
  • Figure 10 illustrates an exemplary flow chart for protecting a message queue in a middleware machine environment, in accordance with an embodiment of the invention.
  • Described herein is a system and method for supporting a transactional middleware system that can take advantage of fast machines with multiple processors, and a high performance network connection in a transactional middleware machine environment.
  • the system can provide message queues for multinode applications using a data structure based on a ring buffer (a circular queue).
  • the system includes a remote ring structure with a first ring structure on a reader and a second ring structure on a writer, wherein each of the first ring structure and the second ring structure has a head pointer and a tail pointer.
  • when the writer operates to write a message to the remote ring, the writer can update the head pointers for both the first ring structure and the second ring structure, as well as the data in the remote ring structure.
  • when the reader operates to read a message from the remote ring, the reader can update the tail pointers for both the first ring structure and the second ring structure. Additionally, the message can be stored in a heap data structure, while the metadata associated with the message can be stored in the remote ring structure.
  • the system comprises a combination of high performance hardware, e.g. 64-bit processor technology, high performance large memory, and redundant InfiniBand and Ethernet networking, together with an application server or middleware environment, such as WebLogic Suite, to provide a complete Java EE application server complex which includes a massively parallel in-memory grid, that can be provisioned quickly, and can scale on demand.
  • the system can be deployed as a full, half, or quarter rack, or other configuration, that provides an application server grid, storage area network, and InfiniBand (IB) network.
  • the middleware machine software can provide application server, middleware and other functionality such as, for example, WebLogic Server, JRockit or Hotspot JVM, Oracle Linux or Solaris, and Oracle VM.
  • the system can include a plurality of compute nodes, IB switch gateway, and storage nodes or units, communicating with one another via an IB network. When implemented as a rack configuration, unused portions of the rack can be left empty or occupied by fillers.
  • the system is an easy-to-deploy solution for hosting middleware or application server software, such as the Oracle Middleware SW suite, or Weblogic.
  • the system is a "grid in a box" that comprises one or more servers, storage units, an IB fabric for storage networking, and all the other components required to host a middleware application.
  • Significant performance can be delivered for all types of middleware applications by leveraging a massively parallel grid architecture using, e.g. Real Application Clusters and Exalogic Open storage.
  • the system delivers improved performance with linear I/O scalability, is simple to use and manage, and delivers mission-critical availability and reliability.
  • Tuxedo is a set of software modules that enables the construction, execution, and administration of high performance, distributed business applications and has been used as transactional middleware by a number of multi-tier application development tools.
  • Tuxedo is a middleware platform that can be used to manage distributed transaction processing in distributed computing environments. It is a proven platform for unlocking enterprise legacy applications and extending them to a services oriented architecture, while delivering unlimited scalability and standards-based interoperability.
  • a middleware machine environment can provide message queues for multinode applications.
  • the transactional middleware machine environment includes a message control data structure on a message receiver and a heap data structure in a shared memory that is associated with the message receiver.
  • the message sender operates to write a message directly into the heap data structure, and to maintain metadata associated with the message in the message control data structure.
  • the message control data structure can be a ring structure with a head pointer and a tail pointer.
  • the message receiver resides on a server that is connected with a plurality of clients, with each of said clients keeping a private copy of the message control data structure. Also, the message receiver can support concurrent access to the message control data structure associated with the message receiver.
  • a middleware machine environment can manage message queues for multinode applications.
  • the middleware machine environment includes a shared memory on a message receiver, wherein the shared memory maintains one or more message queues for the middleware machine environment.
  • the middleware machine environment further includes a daemon process that is capable of creating at least one message queue in the shared memory, when a client requests that the at least one message queue be set up to support sending and receiving messages. Additionally, different processes on a client operate to use at least one proxy to communicate with the message server.
  • the middleware machine environment can protect message queues for multinode applications using a security token created by the daemon process.
  • the RDMA protocol allows a message sender to bypass OS kernels and directly access the memory without a need to wake up a process on the remote machine.
  • FIG. 1 shows an illustration of providing message queues for multinode applications in a middleware machine environment, in accordance with an embodiment of the invention.
  • a middleware machine environment 100 can include multiple server machines, such as Machine A 101 and Machine B 102.
  • a message sender 103 on a local machine, e.g. Machine A 101 can send a message 107 to a message receiver 104 on a remote machine, e.g. Machine B 102.
  • the message receiver 104 on the remote Machine B 102 can use a shared memory 106 that includes a message queue or a message control data structure 108 and a heap data structure 110.
  • a message queue can contain only the metadata information that is associated with the message, while the heap data structure contains the physical message.
  • messages with variable size can be easily accommodated and be stored in the shared memory.
  • the message sender 103 operates to write the message directly into the heap data structure 110, and maintain metadata associated with the message in the message control data structure 108.
  • the message sender 103 includes a message control data structure 105 on the local machine, Machine A 101.
  • the message control data structure 105 can be a copy of the message queue 108 for the message sender 103.
  • the message sender on the local Machine A 101 can further maintain metadata associated with the message in the message control data structure 105 on the local Machine A 101.
  • a message sender on a local Machine A 101 can directly write a message into the heap data structure 110 in a shared memory 106 on the remote Machine B 102.
  • the message sender 103 can bypass the OS kernel on the remote Machine B 102, with the addressing information provided by the message receiver 104.
  • the message sender 103 on the local Machine A 101 can update the status information of the message, such as an input sequence number in the queue on the remote Machine B 102, via the control structure on the local Machine A 101.
  • the message sender 103 on the local Machine A 101 can send a message to a message receiver 104 regardless of the size of the message.
  • this messaging mechanism can be cost-effective and efficient, and requires less overhead for large volumes of data.
  • the message sender 103 can wake up a process 112 on the remote Machine B 102 that is associated with the message receiver 104, according to a pre-configured procedure. For example, the message sender can wake up the process when a service request message that can be handled by the process has been delivered. In another example, the message sender can wake up a daemon process on the remote Machine B 102 when the queue is full.
  • the process can notify the message control structure 105 on the client side, and/or provide the message control structure 105 with a procedure describing how to wake itself up. Then, the process on the receiver side can wait for the delivery of the message. For example, a process that is expecting the message can be in a sleeping state until it is woken up by the message sender.
  • the message receiver can take the message out of the queue, in which case the message receiver can update the message queue 108 and the control structure 105 on the sender side by performing an RDMA write operation.
  • an RDMA write operation can be performed without intervention from the remote Machine B 102.
  • each server on the middleware machine can be provided with a receiver and a sender.
  • the communication between these two servers can be performed by different message senders at the different machines, using RDMA protocol such as RDMA write operations.
  • FIG. 2 illustrates an exemplary flow chart for providing message queues for multinode applications in a middleware machine environment, in accordance with an embodiment of the invention.
  • the system can provide a first message control data structure on a message receiver.
  • the system can associate a heap data structure in a shared memory with the message receiver.
  • the system allows a message sender to write a message directly into the heap data structure, and maintain metadata associated with the message in the first message control data structure.
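The payload/metadata split in the steps above can be sketched in C. This is an illustrative model only: the struct fields and function names are assumptions, not taken from the patent, and a plain memcpy stands in for the RDMA write into the remote heap.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative metadata record: the message control data structure
 * holds only this, while the payload bytes live in the heap. */
typedef struct {
    uint64_t heap_offset;  /* where the payload starts in the heap */
    uint32_t length;       /* payload size in bytes                */
    uint64_t sequence;     /* input sequence number in the queue   */
} msg_meta;

/* The sender's two steps: place the payload into the heap (an RDMA
 * write in the real system, a memcpy here), then build the metadata
 * entry that would be kept in the message control data structure. */
msg_meta send_message(char *heap, uint64_t offset,
                      const char *payload, uint32_t len, uint64_t seq)
{
    memcpy(heap + offset, payload, len);  /* stands in for the RDMA write */
    msg_meta m = { offset, len, seq };
    return m;
}
```

Because the control structure carries only fixed-size records like `msg_meta`, messages of variable size are accommodated by the heap alone.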
  • a data structure based on a ring buffer can be the backbone of this system.
  • this ring structure can work as a first-in first-out (FIFO) queue.
  • FIG. 3 shows an illustration of providing remote memory rings for multinode applications in a middleware machine environment, in accordance with an embodiment of the invention.
  • both the message sender 301 and the message receiver 302 can use a ring structure as a message control data structure, and each ring structure can have a head pointer and a tail pointer.
  • when a message sender 301 operates to write a message into a message queue on a message receiver 302, e.g. a heap data structure in a shared memory, the message sender 301 can update the head pointers 303 and 304 for both ring structures.
  • when a message receiver 302, or reader, operates to read a message from the heap data structure in the shared memory, the reader updates the tail pointers 305 and 306 for both ring structures.
  • a head pointer in a ring structure points to the latest message added to the message queue and a tail pointer in a ring structure points to the oldest message in the message queue.
  • Active messages are stored between the head pointer and the tail pointer.
  • Message senders, or writers, can look at the free space between the head pointer and the tail pointer of the queue (the white section of the ring structure in Figure 3) and move the head pointer forward as they write new messages.
  • message readers can look between the head pointer and the tail pointer of the queue (the shadowed section of the ring structure in Figure 3) to get new messages and move the tail pointer forward as readers read a message. This ensures that both the head pointer and the tail pointer move only in a single direction.
  • the following restrictions can be maintained for each ring operation: only readers update the tail pointer; only writers update the head pointer; the section from the tail pointer to the head pointer in a ring structure contains valid unread messages; and the section from the head pointer to the tail pointer in a ring structure is always free.
  • the reader can read a message even when a writer writes to the ring, and synchronization is not required between the reader and the writer.
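The ring discipline described above, where writers advance only the head and readers advance only the tail, can be sketched as a minimal single-reader/single-writer ring in C. All names and the slot type are illustrative assumptions, not the patent's implementation.

```c
#include <stdint.h>

#define RING_SLOTS 8  /* power of two keeps wrap-around arithmetic cheap */

/* Single-reader/single-writer ring: only the writer advances head,
 * only the reader advances tail, so neither needs a lock. */
typedef struct {
    uint32_t head;               /* next slot the writer will fill */
    uint32_t tail;               /* oldest unread slot             */
    int      slots[RING_SLOTS];  /* metadata entries (ints here)   */
} ring;

int ring_write(ring *r, int v)
{
    if (r->head - r->tail == RING_SLOTS)
        return 0;                        /* full: head has lapped tail */
    r->slots[r->head % RING_SLOTS] = v;  /* publish the entry first    */
    r->head++;                           /* then advance the head      */
    return 1;
}

int ring_read(ring *r, int *out)
{
    if (r->head == r->tail)
        return 0;                        /* empty: nothing unread      */
    *out = r->slots[r->tail % RING_SLOTS];
    r->tail++;                           /* only the reader moves tail */
    return 1;
}
```

Since the head and tail each move in only one direction and are each written by only one side, a read can proceed while a write is in progress without any synchronization between the two.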
  • multiple message senders on different clients in a middleware machine environment can concurrently access a message queue on a server machine in the middleware machine environment.
  • FIG. 4 shows an illustration of a message queue that can be concurrently accessed by multiple message senders in a middleware machine environment, in accordance with an embodiment of the invention.
  • a server 401 can use a message queue 403 to concurrently handle service requests from multiple clients, e.g. Clients A-D 411-414.
  • the message queue 403 can be maintained in a shared memory 402 on the server machine.
  • Each client can maintain a separate message queue 421-424, which can be a private copy of the message queue 403.
  • the different private copies of the message queue 403 (i.e. message queues 421-424) can be synchronized with the message queue 403, e.g. periodically, in order to ensure that each message queue 421-424 is updated in a timely manner.
  • a lock can be activated on a message queue when the queue, or a particular entry in the queue, is currently being updated by a client. Since the queue is in a shared memory on the server machine, every other client can notice that the queue is locked and can be prevented from writing into the corresponding portion of memory that is associated with the particular entry in the queue. Furthermore, the sending of a message can be implemented by performing an RDMA write operation on the sending side. Hence, there is no need to implement a latch or a serialization mechanism on the receiving side in order to guarantee that there is no conflict in writing to and accessing the queue and its associated heap data structure in the shared memory.
  • the clients can race to get access to the queue. Once a client obtains a lock on the queue, or a particular entry in the queue, other clients can wait for the release of the lock, e.g. using a semaphore mechanism provided by the OS in a single-node environment, or using RDMA atomics and latchless mechanisms in a multinode environment.
  • a distributed transactional system can use a server-client model that allows clients to submit work to an available server. The clients can be provided with the results when the work is done. Work submission and its completions can be communicated using message queues.
  • System V message queues provide an efficient way of handling work submission and completion on a single machine in a distributed transactional environment, such as the Oracle Tuxedo environment. Furthermore, System V message queues can be extended for sharing work between multiple machines.
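As a point of comparison for the single-machine case, a minimal exchange over a System V message queue uses the standard msgget/msgsnd/msgrcv calls. The message layout and function name below are illustrative.

```c
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

/* Illustrative work message: System V requires the long mtype first. */
struct workmsg { long mtype; char body[64]; };

/* Create a private queue, submit one request, and read it back;
 * returns 0 on success, -1 on any failure. */
int roundtrip(const char *text, char *out, size_t outlen)
{
    int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
    if (qid < 0) return -1;
    struct workmsg m = { 1, { 0 } };
    strncpy(m.body, text, sizeof m.body - 1);
    if (msgsnd(qid, &m, sizeof m.body, 0) < 0) return -1;
    struct workmsg r;
    if (msgrcv(qid, &r, sizeof r.body, 1, 0) < 0) return -1;
    strncpy(out, r.body, outlen);
    msgctl(qid, IPC_RMID, NULL);  /* remove the queue when done */
    return 0;
}
```

Extending this model across machines is what requires the broker/shadow-queue scheme described next, since System V queues themselves are local to one node.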
  • FIG. 5 shows an illustration of using System V message queues for multinode applications in a middleware machine environment, in accordance with an embodiment of the invention.
  • a shadow queue creation model can be applied over System V message queues in a middleware machine environment 500.
  • when a message queue Q 511 is created on node A 501, a broker on that node, broker A 504, can be informed of the existence of the message queue Q 511.
  • broker A 504 can talk to similar brokers on other nodes 502-503 and can make them create queues with the same name, 'Q', on each node in the cluster.
  • a process 507 on node B 502 can write to a local message queue Q 512. Since node B is not the node where the message queue Q 511 was originally created, the broker process on node B can read the message from the message queue 512 and send the message to broker A 504 on node A over the network using TCP connections. Then, broker A 504 can write the message into the message queue Q 511 on node A. In such a way, a process on any node can write to a queue created from any node without knowing whether the queue is local or remote. Additionally, broker A 504 on node A can continuously monitor all the shadow queues and propagate the messages written to any of the shadow queues to node A, where the original queue was created.
  • a Tuxedo system can take advantage of fast machines with multiple processors, such as an Exalogic middleware machine, and a high performance network connection.
  • the system can provide the transactional middleware system, e.g. Oracle Tuxedo, with an ability of using an available RDMA capable IB network with Exalogic middleware machine.
  • RDMA can offload most of the CPU work associated with message transfer to the host channel adapter (HCA) and/or the network interface card (NIC).
  • the system can help Tuxedo scale its transaction processing capacity on RDMA-capable systems, such as the Exalogic machines.
  • the system can add RDMA capability to the existing messaging infrastructure implementation so that users can run message queues over an IB network using RDMA.
  • FIG. 6 shows an illustration of RDMA message queues for multinode applications in a middleware machine environment, in accordance with an embodiment of the invention.
  • a two-node message queue can use a remote ring structure to represent the message queue.
  • the remote ring structure consists of two normal ring structures: one ring structure 608 kept on the reader side and another ring structure 605 kept on the writer side.
  • a message sender 603 on a local machine, Machine A 601 can send a message to a message receiver 604 on a remote machine, Machine B 602, e.g. using RDMA protocol 620.
  • the message receiver can first create a queue in a shared memory in the remote machine and inform the network interface card of the address of the queue in the shared memory.
  • the message queue can be implemented using a ring buffer data structure that includes a head pointer and tail pointer. Additionally, the message receiver can implement a heap data structure in the shared memory for containing incoming messages. Then, the message receiver can notify the message sender of the creation of the message queue as well as the address information of the heap data structure in the shared memory.
  • the system updates ring data and the head pointer on both ring structures.
  • the system can use RDMA to update the reader side structure if the reader is on a remote node.
  • readers can keep both rings updated as they read messages.
  • messages are not stored directly in the ring structure. Only metadata about where the actual message can be retrieved is kept in the ring structure. Messages are stored in a heap data structure 610 that is kept at the reader node. The actual message can be transferred from the writer process to the allocated memory on the reader node using an RDMA write operation 620.
  • the remote heap 610 implementation can support variable-size messages. In this remote heap 610, allocation and freeing operations are done on the writer node, even though the actual heap memory is kept on the reader node. In an example, the heap memory 610 is on a reader node, while the entire heap metadata is stored on the writer node. Hence, it is possible to do heap allocation from the writer's side without any network communication. Furthermore, heap management can be dissociated from the slot allocation mutex/step, to further minimize contention and simplify remote queue recovery.
  • rmsgptr = allocate_heap(q->heap, msg->size); /* copy message to the reader side (RDMA) */
  • msg = read_msg_from_slot(q->ring, slot);
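A writer-side remote heap allocator of the kind described above can be modeled as a simple bump allocator over offsets: all bookkeeping happens locally on the writer, and only the returned offset refers to memory on the reader node. The structure and names below are assumptions for illustration; a production allocator would also track freed blocks.

```c
#include <stdint.h>

/* Writer-side view of the remote heap: only metadata lives here, so
 * allocation requires no network round trip.  The returned offset
 * refers to memory that physically resides on the reader node. */
typedef struct {
    uint64_t size;  /* total bytes in the remote region */
    uint64_t next;  /* bump pointer: next free byte     */
} remote_heap;

/* Reserve `len` bytes and return their offset, or UINT64_MAX when
 * the region is exhausted. */
uint64_t heap_alloc(remote_heap *h, uint64_t len)
{
    if (len > h->size - h->next)
        return UINT64_MAX;
    uint64_t off = h->next;
    h->next += len;
    return off;
}
```

The writer would then issue an RDMA write of the payload to base-address-plus-offset on the reader node, and publish the offset in the ring metadata.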
  • the entire queue operations can happen in the user mode by different client processes.
  • a process can exit abnormally while it is updating a shared ring structure or heap metadata, e.g. when it is executing get_next_slot/allocate ring slot.
  • a recovery mechanism can be used to detect the process death and bring the metadata to a consistent state so that other processes can still operate on the same queue.
  • a wakeup mechanism can be provided.
  • the above pseudo code in Listing 1 outlines the steps that the system can perform for a queue that is created for a single priority.
  • the system also allows each message to have priorities and retrieval based on priorities.
  • a mechanism can be implemented based on RDMA to wake up processes that wait for specific requests.
  • Different client processes can read and/or write on a same queue.
  • the queue can be created on a shared memory (or a shared storage).
  • updating a shared data may require taking a mutex.
  • a method based on ring structure and atomic compare and swap (CAS) instructions can be implemented to avoid locks in the frequent read and write paths.
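One way such a latchless write path can reserve ring slots is with a compare-and-swap loop over the shared head index, sketched here with C11 atomics. This is an assumption about how the CAS would be applied, not the patent's code; in the multinode case the same step would use an RDMA atomic CAS instead.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Each writer reads the shared head index, then tries to advance it
 * with one compare-and-swap.  The writer whose CAS succeeds owns the
 * slot at the old index; a loser's `cur` is refreshed by the failed
 * CAS and it simply retries.  No latch is held at any point. */
uint32_t claim_slot(_Atomic uint32_t *head)
{
    uint32_t cur = atomic_load(head);
    while (!atomic_compare_exchange_weak(head, &cur, cur + 1))
        ;  /* lost the race: cur now holds the new head, try again */
    return cur;  /* index of the slot this writer now owns */
}
```

Because the loop only ever moves the head forward by one, concurrent writers obtain distinct slot indices without blocking each other.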
  • the use of RDMA for message transfer can reduce the memory bus utilization. This frees the CPU from the entire message transfer, so that the CPU can do other work while messages are being transferred. Furthermore, the system becomes more scalable with the bottleneck, such as the broker for System V message queues, removed. Thus, the use of RDMA provides substantial benefit in terms of CPU usage, message transfer throughput and message transfer latency.
  • the system can take advantage of message queues using RDMA for internode message transfer.
  • the system can use remote ring structures to do message read and write from different machines simultaneously.
  • the system can handle variable sized messages with remote heap allocation.
  • a recovery model can be used to recover queues in the case that a process exits abnormally on a local node or on a remote node. Queues are created in shared memory, with a mechanism devised to perform local or RDMA operations on the shared data.
  • the system can use a wake-up mechanism based on RDMA for remote processes that wait for a message, and concurrent readers and writers are allowed to operate on the same queues using latchless synchronization from user mode processes.
  • the system can provide an interface to do queue operations between different nodes by leveraging the RDMA facility available in modern network interface cards.
  • the programming interface provided by the interface can be similar to that of a System V API.
  • a daemon process on a server node in the middleware machine environment can be used to create and manage the message queue in the shared memory.
  • FIG. 7 shows an illustration of a daemon process that can create and manage a message queue in a middleware machine environment, in accordance with an embodiment of the invention.
  • a middleware machine environment can include a server node 701 and several client nodes 702 and 703.
  • the server node can include a shared memory 704 for receiving messages from different clients, wherein the shared memory maintains one or more message queues 711 and 712.
  • the server node 701 can include a daemon process 706 that is responsible for creating the one or more message queues in the shared memory on the server, when the various clients request the server to set up the message queues for sending and receiving messages.
  • the daemon process 706 on the server can dynamically create a Queue B 712 for communicating with Client B 703 via a message control structure 722.
  • this communication scheme between the server and multiple clients can be further extended using proxies.
  • the queue/control structure A 721 on Client A 702 can be extended using one or more proxies, e.g. Proxies I-III 723-725. Using these proxies, the processes associated with the different proxies on Client A can use the queue/control structure A to communicate with the server.
  • the daemon process 706 on the server 701 can also create and reserve a local message queue, e.g. Queue C 708, for local messaging purpose.
  • the local server processes can communicate with each other using the local message queue, and the System V IPC protocol can be used instead of the RDMA protocol since the IPC protocol is faster than the RDMA protocol when it is used locally.
  • a local server process 707 can receive messages from a local message queue C 708 in addition to the remote message queues, such as Queue A 711 and Queue B 712.
  • the local server process 707 can handle the messages from the different message queues, without a need to address the difference between a local message queue and a remote message queue.
  • a client can determine whether a queue or a control structure on the client is to be created in a shared memory or a private memory. If the client chooses to create the queue or the control structure in a private memory of the client machine that is associated with a particular process, then the system can prevent other processes on the client machine and on remote machines from accessing the control structure on the client. This can be beneficial since some messages can contain sensitive information, such as customer financial information.
  • an interruption can occur on a server process or even the daemon process in a server. The client can continue performing RDMA write operations in the shared memory on the server machine without needing to wait for the recovery of the server process or the daemon process. This makes disaster recovery for the system robust and straightforward. Additionally, the clients can stop writing into the shared memory on the server machine when the queue is full.
  • FIG. 8 illustrates an exemplary flow chart for creating and managing a message queue in a transactional middleware machine environment, in accordance with an embodiment of the invention.
  • a server can provide a shared memory on a message receiver, wherein the shared memory maintains one or more message queues in the middleware machine environment.
  • a client requests that the at least one message queue be set up on the server to support sending and receiving messages.
  • a daemon process on the server can dynamically create at least one message queue in the shared memory, when the server receives the client request.
  • a security model can be used to protect the message queue in the middleware machine environment.
  • FIG. 9 shows an illustration of a security model that can be used to protect a message queue in a middleware machine environment, in accordance with an embodiment of the invention.
  • a message receiver 902 can be configured to communicate with a message sender 901.
  • a daemon process 910 on the server node that is associated with the message receiver 902 can create a key or a security token 914, when the daemon process first creates a message queue 906 in a shared memory 904 on the server machine for communicating with the message sender 901.
  • the daemon process 910 can further register the key or the security token 914 with the IB network, and send the security token 914 to the message sender 901 on the client node via a secured network 920.
  • the message sender 901 can also be associated with a daemon process 905.
  • the message sender 901 can access the shared memory 904 in the receiver machine directly.
  • the message sender 901 on the client node can use the security token 914 to perform an RDMA write operation 921 for writing a message directly in a heap data structure 908 in the shared memory 904 on the receiver side.
  • FIG. 10 illustrates an exemplary flow chart for protecting a message queue in a middleware machine environment, in accordance with an embodiment of the invention.
  • a daemon process on a message receiver can create a security token on a server node, when the daemon process first creates a message queue in a shared memory on the server node for communicating with a client node.
  • the daemon process on a message receiver can send the created security token from the server node to the client node via a secured network.
  • the message sender can directly write a message into the message queue in the shared memory.
  • the present invention may be conveniently implemented using one or more conventional general purpose or specialized digital computer, computing device, machine, or microprocessor, including one or more processors, memory and/or computer readable storage media programmed according to the teachings of the present disclosure.
  • Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art.
  • the present invention includes a computer program product which is a storage medium or computer readable medium (media) having instructions stored thereon/in which can be used to program a computer to perform any of the processes of the present invention.
  • the storage medium can include, but is not limited to, any type of disk including floppy disks, optical discs, DVD, CD-ROMs, microdrive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.

Abstract

A middleware machine environment can provide message queues for multinode applications. The transactional middleware machine environment includes a message control data structure on a message receiver and a heap data structure in a shared memory that is associated with the message receiver. The message sender operates to write a message directly into the heap data structure, and to maintain metadata associated with the message in the message control data structure. Furthermore, the middleware machine environment includes a shared memory on a message receiver, wherein the shared memory maintains one or more message queues for the middleware machine environment. Additionally, the middleware machine environment includes a daemon process that is capable of creating at least one message queue in the shared memory, when a client requests that the at least one message queue be set up to support sending and receiving messages.

Description

SYSTEM AND METHOD FOR PROVIDING AND MANAGING MESSAGE QUEUES FOR MULTINODE APPLICATIONS IN A MIDDLEWARE MACHINE ENVIRONMENT Copyright Notice:
[0001] A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
Field of Invention:
[0002] The present invention is generally related to computer systems and software such as middleware, and is particularly related to supporting a transactional middleware machine environment.
Background:
[0003] A transactional middleware system, or transaction oriented middleware, includes enterprise application servers that can process various transactions within an organization. With the developments in new technologies such as high performance networks and multiprocessor computers, there is a need to further improve the performance of transactional middleware. These are generally the areas that embodiments of the invention are intended to address.
Summary:
[0004] Described herein are systems and methods for providing message queues in a middleware machine environment. The transactional middleware machine environment includes a message control data structure on a message receiver and a heap data structure in a shared memory that is associated with the message receiver. The message sender operates to write a message directly into the heap data structure, and to maintain metadata associated with the message in the message control data structure. Furthermore, the middleware machine environment includes a shared memory on a message receiver, wherein the shared memory maintains one or more message queues for the middleware machine environment. Additionally, the middleware machine environment includes a daemon process that is capable of creating at least one message queue in the shared memory, when a client requests that the at least one message queue be set up to support sending and receiving messages.
Brief Description of the Figures:
[0005] Figure 1 shows an illustration of providing message queues for multinode applications in a middleware machine environment, in accordance with an embodiment of the invention.
[0006] Figure 2 illustrates an exemplary flow chart for providing message queues for multinode applications in a middleware machine environment, in accordance with an embodiment of the invention.
[0007] Figure 3 shows an illustration of providing remote memory rings for multinode applications in a middleware machine environment, in accordance with an embodiment of the invention.
[0008] Figure 4 shows an illustration of a message queue that can be concurrently accessed by multiple message senders in a middleware machine environment, in accordance with an embodiment of the invention.
[0009] Figure 5 shows an illustration of using System V message queues for multinode applications in a middleware machine environment, in accordance with an embodiment of the invention.
[0010] Figure 6 shows an illustration of remote direct memory access (RDMA) message queues for multinode applications in a middleware machine environment, in accordance with an embodiment of the invention.
[0011] Figure 7 shows an illustration of a daemon process that can create and manage a message queue in a middleware machine environment, in accordance with an embodiment of the invention.
[0012] Figure 8 illustrates an exemplary flow chart for creating and managing a message queue in a transactional middleware machine environment, in accordance with an embodiment of the invention.
[0013] Figure 9 shows an illustration of a security model that can be used to protect a message queue in a middleware machine environment, in accordance with an embodiment of the invention.
[0014] Figure 10 illustrates an exemplary flow chart for protecting a message queue in a middleware machine environment, in accordance with an embodiment of the invention.
Detailed Description:
[0015] Described herein is a system and method for supporting a transactional middleware system that can take advantage of fast machines with multiple processors, and a high performance network connection in a transactional middleware machine environment. The system can provide message queues for multinode applications using a data structure based on a ring buffer (a circular queue). The system includes a remote ring structure with a first ring structure on a reader and a second ring structure on a writer, wherein each of the first ring structure and the second ring structure has a head pointer and a tail pointer. When the writer operates to write a message to the remote ring, the writer can update the head pointers for both the first ring structure and the second ring structure, and the data in the remote ring structure. When the reader operates to read a message from the remote ring, the reader can update the tail pointers for both the first ring structure and the second ring structure. Additionally, the message can be stored in a heap data structure, while the metadata associated with the message can be stored in the remote ring structure.
[0016] In accordance with an embodiment of the invention, the system comprises a combination of high performance hardware, e.g. 64-bit processor technology, high performance large memory, and redundant InfiniBand and Ethernet networking, together with an application server or middleware environment, such as WebLogic Suite, to provide a complete Java EE application server complex which includes a massively parallel in-memory grid, that can be provisioned quickly, and can scale on demand. In accordance with an embodiment, the system can be deployed as a full, half, or quarter rack, or other configuration, that provides an application server grid, storage area network, and InfiniBand (IB) network. The middleware machine software can provide application server, middleware and other functionality such as, for example, WebLogic Server, JRockit or Hotspot JVM, Oracle Linux or Solaris, and Oracle VM. The system can include a plurality of compute nodes, IB switch gateway, and storage nodes or units, communicating with one another via an IB network. When implemented as a rack configuration, unused portions of the rack can be left empty or occupied by fillers.
[0017] In accordance with an embodiment of the invention, referred to herein as "Sun Oracle Exalogic" or "Exalogic", the system is an easy-to-deploy solution for hosting middleware or application server software, such as the Oracle Middleware SW suite, or Weblogic. As described herein, the system is a "grid in a box" that comprises one or more servers, storage units, an IB fabric for storage networking, and all the other components required to host a middleware application. Significant performance can be delivered for all types of middleware applications by leveraging a massively parallel grid architecture using, e.g. Real Application Clusters and Exalogic Open storage. The system delivers improved performance with linear I/O scalability, is simple to use and manage, and delivers mission-critical availability and reliability.
[0018] In accordance with an embodiment of the invention, Tuxedo is a set of software modules that enables the construction, execution, and administration of high performance, distributed business applications and has been used as transactional middleware by a number of multi-tier application development tools. Tuxedo is a middleware platform that can be used to manage distributed transaction processing in distributed computing environments. It is a proven platform for unlocking enterprise legacy applications and extending them to a services oriented architecture, while delivering unlimited scalability and standards-based interoperability.
[0019] In accordance with one embodiment of the invention, a middleware machine environment can provide message queues for multinode applications. The transactional middleware machine environment includes a message control data structure on a message receiver and a heap data structure in a shared memory that is associated with the message receiver. The message sender operates to write a message directly into the heap data structure, and to maintain metadata associated with the message in the message control data structure. Furthermore, the message control data structure can be a ring structure with a head pointer and a tail pointer. Additionally, the message receiver resides on a server that is connected with a plurality of clients, with each of said clients keeping a private copy of the message control data structure. Also, the message receiver can support concurrent access to the message control data structure associated with the message receiver.
[0020] In accordance with another embodiment of the invention, a middleware machine environment can manage message queues for multinode applications. The middleware machine environment includes a shared memory on a message receiver, wherein the shared memory maintains one or more message queues for the middleware machine environment. The middleware machine environment further includes a daemon process that is capable of creating at least one message queue in the shared memory, when a client requests that the at least one message queue be set up to support sending and receiving messages. Additionally, different processes on a client operate to use at least one proxy to communicate with the message server. Furthermore, the middleware machine environment can protect message queues for multinode applications using a security token created by the daemon process.
Message Queues for Multinode Applications
[0021] In accordance with an embodiment of the invention, messaging software, such as messaging queues, can take advantage of a high performance network, such as an IB network using a remote direct memory access (RDMA) protocol. The RDMA protocol allows a message sender to bypass OS kernels and directly access the memory without a need to wake up a process on the remote machine.
[0022] Figure 1 shows an illustration of providing message queues for multinode applications in a middleware machine environment, in accordance with an embodiment of the invention. As shown in Figure 1, a middleware machine environment 100 can include multiple server machines, such as Machine A 101 and Machine B 102. A message sender 103 on a local machine, e.g. Machine A 101, can send a message 107 to a message receiver 104 on a remote machine, e.g. Machine B 102. The message receiver 104 on the remote Machine B 102 can use a shared memory 106 that includes a message queue or a message control data structure 108 and a heap data structure 110.
[0023] In accordance with an embodiment of the invention, a message queue can contain only the metadata information that is associated with the message, while the heap data structure contains the physical message. Thus, messages with variable size can be easily accommodated and be stored in the shared memory. As shown in Figure 1 , the message sender 103 operates to write the message directly into the heap data structure 110, and maintain metadata associated with the message in the message control data structure 108.
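The metadata/payload split described above can be sketched as follows. This is a simplified, single-machine model rather than the patented implementation: the struct layout and the names enqueue/peek are illustrative assumptions. Fixed-size metadata slots describe where each variable-size message body sits in the heap.

```c
#include <stddef.h>
#include <string.h>

#define HEAP_SIZE 4096
#define QUEUE_SLOTS 64

/* Fixed-size metadata kept in a message queue slot. */
struct msg_slot {
    size_t offset;   /* where the message body lives in the heap */
    size_t size;     /* length of the message body */
    unsigned seqno;  /* input sequence number */
};

struct shared_mem {
    struct msg_slot queue[QUEUE_SLOTS]; /* message control data structure */
    char heap[HEAP_SIZE];               /* message bodies, variable size */
    size_t heap_used;
    unsigned next_seq;
};

/* Write a message: the body goes into the heap, the metadata into the queue. */
int enqueue(struct shared_mem *sm, unsigned slot, const char *data, size_t len)
{
    if (slot >= QUEUE_SLOTS || sm->heap_used + len > HEAP_SIZE)
        return -1;
    memcpy(sm->heap + sm->heap_used, data, len);
    sm->queue[slot].offset = sm->heap_used;
    sm->queue[slot].size = len;
    sm->queue[slot].seqno = sm->next_seq++;
    sm->heap_used += len;
    return 0;
}

/* Read a message body back through its metadata slot. */
const char *peek(const struct shared_mem *sm, unsigned slot, size_t *len)
{
    *len = sm->queue[slot].size;
    return sm->heap + sm->queue[slot].offset;
}
```

Because the queue slots hold only offsets and sizes, messages of any length fit in the shared memory without changing the queue layout.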
[0024] Also as shown in Figure 1, the message sender 103 includes a message control data structure 105 on the local machine, Machine A 101. The message control data structure 105 can be a copy of the message queue 108 for the message sender 103. The message sender on the local Machine A 101 can further maintain metadata associated with the message in the message control data structure 105 on the local Machine A 101.
[0025] In accordance with an embodiment of the invention, a message sender on a local Machine A 101 can directly write a message into the heap data structure 110 in a shared memory 106 on the remote Machine B 102. As shown in Figure 1, the message sender 103 can bypass the OS kernel on the remote Machine B 102, with the addressing information provided by the message receiver 104. Furthermore, the message sender 103 on the local Machine A 101 can update the status information of the message, such as an input sequence number in the queue on the remote Machine B 102, via the control structure on the local Machine A 101.
[0026] Furthermore, the message sender 103 on the local Machine A 101 can send a message to the message receiver 104 regardless of the size of the message. Hence, this messaging mechanism can be cost effective and efficient, and requires less overhead for large volumes of data.
[0027] Additionally, the message sender 103 can wake up a process 112 on the remote Machine B 102 that is associated with the message receiver 104, according to a pre-configured procedure. For example, the message sender can wake up the process when a service request message that can be handled by the process has been delivered. In another example, the message sender can wake up a daemon process on the remote Machine B 102 when the queue is full.
[0028] In accordance with an embodiment of the invention, before a process on the message receiver, e.g. process 112, goes to sleep, the process can register with the message control structure 105 on the client side, and/or provide the message control structure 105 with a procedure for waking itself up. Then, the process on the receiver side can wait for the delivery of the message. For example, a process that is expecting the message can remain in a sleeping state until it is woken up by the message sender.
[0029] Also as shown in Figure 1, after a message receiver 104 consumes a message, the message receiver can take the message out of the queue, in which case the message receiver can update the message queue 108 and the control structure 105 on the sender side by performing an RDMA write operation. Such an RDMA write operation can be performed without intervention from the client on the remote Machine B 102.
[0030] In accordance with an embodiment of the invention, in order to support two-way communications between two servers in a middleware machine environment, each server on the middleware machine can be provided with a receiver and a sender. Thus, the communication between these two servers can be performed by different message senders on the different machines, using the RDMA protocol, e.g. RDMA write operations.
[0031] Figure 2 illustrates an exemplary flow chart for providing message queues for multinode applications in a middleware machine environment, in accordance with an embodiment of the invention. As shown in Figure 2, at step 201, the system can provide a first message control data structure on a message receiver. At step 202, the system can associate a heap data structure in a shared memory with the message receiver. Then, at step 203, the system allows a message sender to write a message directly into the heap data structure, and maintain metadata associated with the message in the first message control data structure.
Remote Memory Rings
[0032] In accordance with an embodiment of the invention, a data structure based on a ring buffer (a circular queue) can be the backbone of this system. In a simplified case, this ring structure can work as a first-in first-out (FIFO) queue.
[0033] Figure 3 shows an illustration of providing remote memory rings for multinode applications in a middleware machine environment, in accordance with an embodiment of the invention. As shown in Figure 3, both the message sender 301 and the message receiver 302 can use a ring structure as a message control data structure, and each ring structure can have a head pointer and a tail pointer. When a message sender 301 operates to write a message into a message queue on a message receiver 302, e.g. a heap data structure in a shared memory, the message sender 301 can update the head pointers 303 and 304 for both ring structures. On the other hand, when a message receiver 302, or a reader, operates to read a message from the heap data structure in the shared memory, the reader updates the tail pointers 305 and 306 for both ring structures.
[0034] In accordance with an embodiment of the invention, a head pointer in a ring structure points to the latest message added to the message queue, and a tail pointer in a ring structure points to the oldest message in the message queue. Active messages are stored between the tail pointer and the head pointer. Message senders, or writers, can look at the free space between the head pointer and the tail pointer of the queue (the white section of the ring structure in Figure 3) and move the head pointer forward as they write new messages. On the other hand, message readers can look between the tail pointer and the head pointer of the queue (the shadowed section of the ring structure in Figure 3) to get new messages, and move the tail pointer forward as they read messages. This ensures that both the head pointer and the tail pointer move only in a single direction.
[0035] In accordance with an embodiment of the invention, the following restrictions can be maintained for each ring operation: only readers update tail pointer; only writers update head pointer; the section from the tail pointer to the head pointer in a ring structure contains valid unread messages; and the section from the head pointer to the tail pointer in a ring structure is always free. Thus, the reader can read a message even when a writer writes to the ring, and synchronization is not required between the reader and the writer.
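The ring discipline above — only the writer moves the head, only the reader moves the tail, and each pointer advances in a single direction — can be sketched as a minimal single-producer/single-consumer ring. The names and sizes below are illustrative, not from the patent; one slot is kept empty so a full ring can be told apart from an empty one.

```c
#define RING_SIZE 8  /* one slot stays empty to distinguish full from empty */

struct ring {
    int data[RING_SIZE];
    unsigned head;  /* updated only by the writer */
    unsigned tail;  /* updated only by the reader */
};

/* Writer: succeeds only if there is free space from head to tail. */
int ring_write(struct ring *r, int value)
{
    unsigned next = (r->head + 1) % RING_SIZE;
    if (next == r->tail)
        return -1;              /* ring full */
    r->data[r->head] = value;   /* fill the slot first ... */
    r->head = next;             /* ... then publish by moving the head */
    return 0;
}

/* Reader: consumes the oldest message and advances the tail. */
int ring_read(struct ring *r, int *value)
{
    if (r->tail == r->head)
        return -1;              /* ring empty */
    *value = r->data[r->tail];
    r->tail = (r->tail + 1) % RING_SIZE;
    return 0;
}
```

Because each side writes only its own pointer, a single reader and a single writer need no lock between them, matching the restriction list above.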
Concurrent readers and writers
[0036] In accordance with an embodiment of the invention, multiple message senders on different clients in a middleware machine environment can concurrently access a message queue on a server machine in the middleware machine environment.
[0037] Figure 4 shows an illustration of a message queue that can be concurrently accessed by multiple message senders in a middleware machine environment, in accordance with an embodiment of the invention. As shown in Figure 4, a server 401 can use a message queue 403 to concurrently handle service requests from multiple clients, e.g. Clients A-D 411-414. The message queue 403 can be maintained in a shared memory 402 on the server machine. Each client can maintain a separate message queue 421-424, which can be a private copy of the message queue 403. Furthermore, the different private copies of the message queue 403 (i.e. message queues 421-424) can be synchronized with the message queue 403, e.g. periodically, in order to ensure that each message queue 421-424 is updated in a timely manner.
[0038] In accordance with an embodiment of the invention, a lock can be activated on a message queue when the queue, or a particular entry in the queue, is currently being updated by a client. Since the queue is in a shared memory on the server machine, every other client can notice that the queue is locked and can be prevented from writing into the corresponding portion of memory that is associated with the particular entry in the queue. Furthermore, the sending of a message can be implemented by performing an RDMA write operation on the sending side. Hence, there is no need to implement a latch or a serialization mechanism on the receiving side in order to guarantee that there is no conflict in writing to and accessing the queue and its associated heap data structure in the shared memory.
[0039] In accordance with an embodiment of the invention, the clients can race to get access to the queue. Once a client obtains a lock on the queue, or on a particular entry in the queue, other clients can wait for the release of the lock, e.g. using a semaphore mechanism provided by the OS in a single node environment, or using RDMA atomics and latchless mechanisms in a multinode environment.
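A compare-and-swap claim on a queue entry, as described above, might look like the following single-node sketch using C11 atomics; in a multinode environment the same pattern would be carried out with RDMA atomic compare-and-swap operations instead. The names slot_trylock/slot_unlock are illustrative assumptions.

```c
#include <stdatomic.h>

/* A lock word in shared memory: 0 = free, otherwise the holder's id. */
typedef atomic_uint slot_lock_t;

/* Try to claim the entry atomically; returns nonzero on success.
 * Clients that lose the race observe a nonzero holder id and wait. */
int slot_trylock(slot_lock_t *lock, unsigned client_id)
{
    unsigned expected = 0;
    return atomic_compare_exchange_strong(lock, &expected, client_id);
}

/* Release the entry so other clients can claim it. */
void slot_unlock(slot_lock_t *lock)
{
    atomic_store(lock, 0);
}
```

Only one racing client can move the word from 0 to its own id, so exactly one writer proceeds per entry without any kernel-mediated latch.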
System V Message Queues
[0040] In accordance with an embodiment of the invention, a distributed transactional system can use a server-client model that allows clients to submit work to an available server. The clients can be provided with the results when the work is done. Work submission and its completions can be communicated using message queues. System V message queues provide an efficient way of handling work submission and completion on a single machine in a distributed transactional environment, such as the Oracle Tuxedo environment. Furthermore, System V message queues can be extended for sharing work between multiple machines.
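For reference, a single-machine work submission round trip over a System V queue uses the standard msgget/msgsnd/msgrcv calls. The sketch below assumes a Linux/Unix host; the helper name sysv_roundtrip and the message layout are illustrative.

```c
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

/* A work item: System V messages begin with a positive long type field. */
struct work_msg {
    long mtype;
    char body[64];
};

/* Submit a work item to a private System V queue and read it back.
 * Returns 0 on success, -1 on any failure. */
int sysv_roundtrip(void)
{
    struct work_msg out, in;
    int rc = -1;
    int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
    if (qid < 0)
        return -1;

    memset(&out, 0, sizeof(out));
    out.mtype = 1;
    strcpy(out.body, "do-work");

    /* msgsnd/msgrcv take the payload size, excluding the mtype field */
    if (msgsnd(qid, &out, sizeof(out.body), 0) == 0 &&
        msgrcv(qid, &in, sizeof(in.body), 1, 0) >= 0 &&
        strcmp(in.body, "do-work") == 0)
        rc = 0;

    msgctl(qid, IPC_RMID, NULL);  /* remove the queue */
    return rc;
}
```

This works well on one machine; the shadow-queue model described next is what extends it across nodes.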
[0041] Figure 5 shows an illustration of using System V message queues for multinode applications in a middleware machine environment, in accordance with an embodiment of the invention. As shown in Figure 5, a shadow queue creation model can be applied over System V message queues in a middleware machine environment 500. When a message queue Q 511 is created on a node A 501, a broker on that node, broker A 504, can be informed of the existence of the message queue Q 511. Then, broker A 504 can talk to similar brokers on other nodes 502-503 and can make them create queues with the same name - 'Q' - on each node in the cluster.
[0042] In accordance with an embodiment of the invention, a process 507 on a node B 502 can write to a local message queue Q 512. Since node B is not the node where the message queue Q 511 was originally created, the broker process on node B can read the message from the message queue 512 and send the message to the broker A 504 on node A over the network using TCP connections. Then, the broker A 504 can write the message into the message queue Q 511 on node A. In such a way, a process on any node can write to a queue created on any node without really knowing whether the queue is local or remote. Additionally, the broker A 504 on node A can continuously monitor all the shadow queues and propagate the messages written to any of the shadow queues to node A, where the original queue was created.
[0043] There are limitations associated with the above programming model, for example: 1) a message written from a remote node to a queue may require several (e.g. 5) memory copies to reach the destination queue, so this model puts a lot of stress on the CPU bus; 2) when there are a large number of queues, the entire environment depends on the throughput of the broker, which can become a bottleneck; and 3) this model does not take advantage of an available RDMA network that can scale the transfer of messages.
RDMA Message Queues
[0044] In accordance with an embodiment of the invention, a transactional middleware system, such as a Tuxedo system, can take advantage of fast machines with multiple processors, such as an Exalogic middleware machine, and a high performance network connection.
[0045] The system can provide the transactional middleware system, e.g. Oracle Tuxedo, with the ability to use an available RDMA-capable IB network on an Exalogic middleware machine. RDMA can offload most of the CPU work associated with message transfer to the host channel adapter (HCA) and/or the network interface card (NIC). The system can help Tuxedo scale its transaction processing capacity on RDMA-capable systems, such as the Exalogic machines. The system can add RDMA capability to the existing messaging infrastructure implementation so that users can run message queues over an IB network using RDMA.
[0046] Figure 6 shows an illustration of RDMA message queues for multinode applications in a middleware machine environment, in accordance with an embodiment of the invention. As shown in Figure 6, a two-node message queue can use a remote ring structure to represent the message queue. The remote ring structure consists of two normal ring structures: one ring structure 608 kept on the reader side and another ring structure 605 kept on the writer side. A message sender 603 on a local machine, Machine A 601, can send a message to a message receiver 604 on a remote machine, Machine B 602, e.g. using the RDMA protocol 620.
[0047] In accordance with an embodiment of the invention, the message receiver can first create a queue in a shared memory on the remote machine and inform the network interface card of the address of the queue in the shared memory. The message queue can be implemented using a ring buffer data structure that includes a head pointer and a tail pointer. Additionally, the message receiver can implement a heap data structure in the shared memory for containing incoming messages. Then, the message receiver can notify the message sender of the creation of the message queue as well as the address information of the heap data structure in the shared memory.
[0048] Additionally, when a writer writes a new message to the message queue, the system updates the ring data and the head pointer on both ring structures. The system can use RDMA to update the reader-side structure if the reader is on a remote node. Likewise, readers can keep both rings updated as they read messages.
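The dual-ring update described above can be sketched as follows. This is a local simulation: the RDMA write that keeps the other side's copy current is stood in for by a plain memcpy, and all names are illustrative assumptions.

```c
#include <string.h>

#define SLOTS 8

/* One copy of the remote ring; the writer and the reader each hold one. */
struct ring_copy {
    int slots[SLOTS];
    unsigned head;
    unsigned tail;
};

/* Writer: update the local copy, then push slot data and head to the
 * reader's copy ("RDMA write" simulated with memcpy). */
void writer_push(struct ring_copy *local, struct ring_copy *remote, int value)
{
    local->slots[local->head % SLOTS] = value;
    local->head++;
    memcpy(remote->slots, local->slots, sizeof(local->slots));
    remote->head = local->head;
}

/* Reader: consume locally, then push the new tail back to the writer's
 * copy so the writer sees the freed space. */
void reader_pop(struct ring_copy *local, struct ring_copy *remote, int *value)
{
    *value = local->slots[local->tail % SLOTS];
    local->tail++;
    remote->tail = local->tail;
}
```

Each side owns one pointer and mirrors it to the other side after every operation, which is exactly the update pattern the paragraph above describes.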
[0049] In accordance with an embodiment of the invention, messages are not stored directly in the ring structure. Only metadata about where the actual message can be retrieved is kept in the ring structure. Messages are stored in a heap data structure 610 that is kept on the reader node. The actual message can be transferred from the writer process to the allocated memory on the reader node using an RDMA write operation 620. The remote heap 610 implementation can support variable size messages. In this remote heap 610, allocation and freeing operations are done on the writer node, even though the actual heap memory is kept on the reader node. In an example, the heap memory 610 is on a reader node, while the entire heap metadata is stored on the writer node. Hence, it is possible to do heap allocation from the writer's side without any network communication. Furthermore, heap management can be dissociated from the slot allocation mutex/step, to further minimize contention and simplify remote queue recovery.
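The writer-side heap bookkeeping can be sketched as follows: all metadata (per-slot offsets and lengths) lives on the writer, and only byte offsets into the reader-resident heap are handed out, so allocation needs no network round trip. This is a deliberately simplified bump allocator under assumed names; the heap described above supports general variable-size allocation.

```c
#include <stddef.h>

#define REMOTE_HEAP_SIZE 1024
#define NSLOTS 4

/* All heap bookkeeping lives on the writer; only the bytes live remotely. */
struct rheap {
    size_t slot_off[NSLOTS];  /* chunk currently owned by each ring slot */
    size_t slot_len[NSLOTS];
    size_t brk;               /* simple bump pointer into the remote region */
    size_t live;              /* bytes currently allocated */
};

/* Free whatever the slot held before (the free_heap step in Listing 1). */
void rheap_free_slot(struct rheap *h, int slot)
{
    h->live -= h->slot_len[slot];
    h->slot_len[slot] = 0;
    if (h->live == 0)
        h->brk = 0;           /* heap empty: recycle the whole region */
}

/* Allocate a chunk for the slot (the allocate_heap step in Listing 1);
 * returns the remote offset, or (size_t)-1 if the heap is full. */
size_t rheap_alloc_slot(struct rheap *h, int slot, size_t len)
{
    if (h->brk + len > REMOTE_HEAP_SIZE)
        return (size_t)-1;
    h->slot_off[slot] = h->brk;
    h->slot_len[slot] = len;
    h->brk += len;
    h->live += len;
    return h->slot_off[slot];
}
```

A real implementation would use a proper free list so freed chunks can be reused before the heap drains, but the key property is the same: every decision here is local to the writer.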
[0050] The following Listing 1 contains pseudo code that illustrates the queue write and read operations when the queue is created without allowing message priorities and with the help of locks:

msgwrite(q, msg)
{
    /* get lock for writers */
    getlock(q->writers);
    /* allocate a ring slot */
    slot = allocate_ring_slot(q->ring);
    /* free old memory allocated for this slot */
    free_heap(q->heap, slot);
    /* allocate new memory */
    rmsgptr = allocate_heap(q->heap, msg->size);
    /* copy message to the reader side (RDMA) */
    remote_copy_msg(q, rmsgptr, msg->data, msg->size);
    /* update slot with message detail */
    update_slot(q->ring, slot, rmsgptr, msg->size);
    /* update slot at the remote side */
    remote_update_slot(q->ring, slot);
    /* update ring head */
    q->ring->head++;
    /* update ring head on remote side */
    remote_update(q->ring->head);
    /* free lock for writers */
    putlock(q->writers);
}

msgread(q)
{
    /* get lock for readers */
    getlock(q->readers);
    /* get the next slot from tail */
    slot = get_next_slot(q->ring);
    /* read the message from the location pointed to by the ring entry at 'slot' */
    msg = read_msg_from_slot(q->ring, slot);
    /* update ring tail */
    q->ring->tail++;
    /* update ring tail on writer side */
    remote_update(q->ring->tail);
    /* free lock for readers */
    putlock(q->readers);
    return msg;
}
Listing 1

[0051] In accordance with an embodiment of the invention, the entire set of queue operations can happen in user mode in different client processes. A process can exit abnormally while it is updating a shared ring structure or heap metadata, e.g. while it is executing get_next_slot/allocate_ring_slot. A recovery mechanism can be used to detect the process death and bring the metadata back to a consistent state so that other processes can still operate on the same queue.
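One way such a recovery mechanism could detect a dead updater is to stamp each in-progress slot with the owner's process id and probe it with kill(pid, 0). This is an assumed approach for illustration, not the patented mechanism, and the names are hypothetical:

```c
#include <errno.h>
#include <signal.h>
#include <sys/types.h>
#include <unistd.h>

/* Each shared slot under modification is stamped with the updater's pid;
 * 0 means no update is in progress. */
struct slot_guard {
    pid_t owner;
};

/* kill() with signal 0 probes for existence without delivering a signal;
 * EPERM still means the process exists, only ESRCH means it is gone. */
static int process_alive(pid_t pid)
{
    return kill(pid, 0) == 0 || errno != ESRCH;
}

/* Recovery scan: if the stamped owner died mid-update, roll the slot
 * back to a consistent (free) state. Returns 1 if the slot was recovered. */
int recover_slot(struct slot_guard *g)
{
    if (g->owner != 0 && !process_alive(g->owner)) {
        g->owner = 0;
        return 1;
    }
    return 0;
}
```

A scan like this can run from any surviving process, since the stamps live in the same shared memory as the queue metadata.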
[0052] In accordance with an embodiment of the invention, a wakeup mechanism can be provided. The above pseudo code in Listing 1 outlines the steps that the system can perform for a queue that is created with a single priority. The system also allows each message to have a priority, with retrieval based on priorities. Sometimes a client may ask for a message with some particular property, e.g. priority less than 'n', equal to 'n', or not 'n'. If no message that can satisfy this request is in the queue at the moment, then the client process can be put into a sleep mode and woken up when a process from any node writes a message that satisfies the request. A mechanism can be implemented based on RDMA to wake up processes that wait on specific requests.
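The predicate matching behind this wakeup can be sketched as follows. This is an illustrative model with hypothetical names (pri_op, wake_matching): each sleeping reader registers the property it asked for, and after enqueueing a message the writer wakes every sleeper whose request the new message satisfies. In the RDMA design described above, the wake itself would be a remote write to a flag the sleeper waits on; here it is modeled as clearing a local boolean.

```c
#include <stdbool.h>

typedef enum { PRI_EQ, PRI_LT, PRI_NE } pri_op;   /* == n, < n, != n */

typedef struct {
    bool   sleeping;
    pri_op op;   /* requested relation */
    int    n;    /* requested priority value */
} waiter;

/* Does a message of priority msg_priority satisfy the waiter's request? */
static bool request_matches(pri_op op, int n, int msg_priority) {
    switch (op) {
    case PRI_EQ: return msg_priority == n;
    case PRI_LT: return msg_priority < n;
    case PRI_NE: return msg_priority != n;
    }
    return false;
}

/* Called by the writer after a message of the given priority is enqueued;
 * wakes every sleeper whose predicate the message satisfies and returns
 * how many were woken. */
int wake_matching(waiter *ws, int count, int msg_priority) {
    int woken = 0;
    for (int i = 0; i < count; i++) {
        if (ws[i].sleeping && request_matches(ws[i].op, ws[i].n, msg_priority)) {
            ws[i].sleeping = false;
            woken++;
        }
    }
    return woken;
}
```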
[0053] Different client processes can read and/or write on a same queue. In such a scenario, the queue can be created in a shared memory (or on shared storage). In most shared memory based applications, updating shared data may require taking a mutex. A method based on the ring structure and atomic compare-and-swap (CAS) instructions can be implemented to avoid locks in the frequent read and write paths.
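One way such lock-free slot allocation can work is sketched below, using C11 atomics. This is an assumed design, not the patented one: a writer reads the head, checks that the ring is not full, and advances the head with a compare-and-swap; a failed CAS means another writer won the slot, so the writer retries with the freshly observed head. The names (ring_ctl, claim_write_slot) and the power-of-two ring size are illustrative.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define RING_SIZE 8   /* power of two, illustrative */

typedef struct {
    _Atomic uint64_t head;  /* next slot to write */
    _Atomic uint64_t tail;  /* next slot to read  */
} ring_ctl;

/* Claim the next write slot without taking a lock. Returns false if the
 * ring is full; otherwise stores the claimed slot index in *slot_out. */
bool claim_write_slot(ring_ctl *r, uint64_t *slot_out) {
    uint64_t head = atomic_load(&r->head);
    for (;;) {
        if (head - atomic_load(&r->tail) >= RING_SIZE)
            return false;                      /* ring full */
        /* On success, head is advanced atomically; on failure, 'head' is
         * reloaded with the current value by the CAS, and we retry. */
        if (atomic_compare_exchange_weak(&r->head, &head, head + 1)) {
            *slot_out = head % RING_SIZE;
            return true;
        }
    }
}
```

A reader would claim slots from the tail symmetrically. Because each process that wins the CAS owns its slot exclusively, the subsequent slot update and RDMA transfer need no further synchronization against other writers.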
[0054] In accordance with an embodiment of the invention, the use of RDMA for message transfer can reduce the memory bus utilization. This frees the CPU from the entire message transfer, so that the CPU can do other work while messages are being transferred. Furthermore, the system becomes more scalable with the bottleneck, such as the broker for System V message queues, removed. Thus, the use of RDMA provides substantial benefit in terms of CPU usage, message transfer throughput and message transfer latency.
[0055] In accordance with an embodiment of the invention, the system can take advantage of message queues using RDMA for internode message transfer. The system can use remote ring structures to do message reads and writes from different machines simultaneously. The system can handle variable sized messages with remote heap allocation. A recovery model can be used to recover queues in the case of an abnormal process exit on a local node or on a remote node. Queues are created in shared memory with a devised mechanism to do local or RDMA operations on shared data. The system can use a wakeup mechanism based on RDMA for remote processes that wait for a message, and concurrent readers and writers are allowed to operate on the same queues using latchless synchronization from user mode processes.
[0056] In accordance with an embodiment of the invention, the system can provide an interface to do queue operations between different nodes by leveraging the RDMA facility available in modern network interface cards. The programming interface provided can be similar to that of the System V API.
Message Queue Creation and Management
[0057] In accordance with an embodiment of the invention, a daemon process on a server node in the middleware machine environment can be used to create and manage the message queue in the shared memory.
[0058] Figure 7 shows an illustration of a daemon process that can create and manage a message queue in a middleware machine environment, in accordance with an embodiment of the invention. As shown in Figure 7, a middleware machine environment can include a server node 701 and several client nodes 702 and 703. The server node can include a shared memory 704 for receiving messages from different clients, wherein the shared memory maintains one or more message queues 711 and 712.
[0059] In accordance with an embodiment of the invention, the server node 701 can include a daemon process 706 that is responsible for creating the one or more message queues in the shared memory on the server, when the various clients request the server to set up the message queues for sending and receiving messages. For example, when Client B 703 initiates a connection with the server 701, the daemon process 706 on the server can dynamically create a Queue B 712 for communicating with Client B 703 via a message control structure 722.
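The daemon's bookkeeping for this on-demand queue creation can be sketched as follows. This is a hypothetical model with illustrative names and sizes (daemon_state, MAX_QUEUES): when a client connects, the daemon binds a free queue slot in shared memory to that client and hands back its index as a handle.

```c
#include <stdbool.h>

#define MAX_QUEUES 16   /* illustrative capacity */

typedef struct {
    bool in_use;
    int  client_id;     /* which client this queue serves */
} mqueue;

typedef struct {
    mqueue queues[MAX_QUEUES];   /* lives in the server's shared memory */
} daemon_state;

/* Called when a client requests a queue; returns a queue handle (index)
 * the client uses from then on, or -1 if no slot is free. */
int daemon_create_queue(daemon_state *d, int client_id) {
    for (int i = 0; i < MAX_QUEUES; i++) {
        if (!d->queues[i].in_use) {
            d->queues[i].in_use = true;
            d->queues[i].client_id = client_id;
            return i;
        }
    }
    return -1;
}
```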
[0060] In accordance with an embodiment of the invention, this communication scheme between the server and multiple clients can be further extended using proxies. For example, the queue/control structure A 721 on Client A 702 can be extended using one or more proxies, e.g. Proxies I-III 723-725. Using these proxies, the processes associated with the different proxies on Client A can use the queue/control structure A to communicate with the server.
[0061] Thus, great scalability can be achieved in the middleware machine for supporting communication between different servers and clients using the RDMA protocol, since a message initiated from a process on Client A 702 can be sent to the server 701 by allowing the process to write the message directly into the heap data structure 705 on the server 701, without server intervention.
[0062] In accordance with an embodiment of the invention, the daemon process 706 on the server 701 can also create and reserve a local message queue, e.g. Queue C 708, for local messaging purposes. In one example, the local server processes can communicate with each other using the local message queue, and the System V IPC protocol can be used instead of the RDMA protocol since the IPC protocol is faster than the RDMA protocol when used locally.
[0063] As shown in Figure 7, a local server process 707 can receive messages from a local message queue C 708 in addition to the remote message queues, such as Queue A 711 and Queue B 712. The local server process 707 can handle the messages from the different message queues, without a need to address the difference between a local message queue and a remote message queue.
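One way the server process can stay oblivious to the local/remote distinction is to give every queue a transport-specific read function behind one common interface, as sketched below. This is an assumed design with illustrative names (queue_read, demo_read), not the patented implementation; the demo transport simply yields a fixed message.

```c
#include <string.h>

typedef struct queue queue;
struct queue {
    const char *name;
    /* transport-specific read: System V IPC locally, RDMA-backed remotely */
    int (*read)(queue *q, char *buf, int len);
};

/* The server process calls this for every queue; the dispatch hides
 * whether the queue is local or remote. */
int queue_read(queue *q, char *buf, int len) {
    return q->read(q, buf, len);
}

/* Demo transport that always yields the same message. */
static int demo_read(queue *q, char *buf, int len) {
    (void)q;
    const char *msg = "hello";
    int n = (int)strlen(msg);
    if (n >= len)
        n = len - 1;
    memcpy(buf, msg, (size_t)n);
    buf[n] = '\0';
    return n;
}
```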
[0064] In accordance with an embodiment of the invention, a client can determine whether a queue or a control structure on the client is created in a shared memory or a private memory. If the client chooses to create the queue or the control structure in a private memory of the client machine that is associated with a particular process, then the system can prevent other processes on the client machine and remote machines from accessing the control structure on the client. This can be beneficial since some messages can contain sensitive information such as customer financial information.

[0065] In accordance with an embodiment of the invention, an interruption can occur on a server process or even the daemon process in a server. The client can continue performing RDMA write operations in the shared memory on the server machine without a need to wait for the recovery of the server process or the daemon process. This makes the disaster recovery for the system robust and straightforward. Additionally, the clients can stop writing into the shared memory on the server machine when the queue is full.
[0066] Figure 8 illustrates an exemplary flow chart for creating and managing a message queue in a transactional middleware machine environment, in accordance with an embodiment of the invention. As shown in Figure 8, at step 801, a server can provide a shared memory on a message receiver, wherein the shared memory maintains one or more message queues in the middleware machine environment. Then, at step 802, a client requests that the at least one message queue be set up on the server to support sending and receiving messages. Finally, at step 803, a daemon process on the server can dynamically create at least one message queue in the shared memory, when the server receives the client request.
Security Model for Protecting a Message Queue
[0067] In accordance with an embodiment of the invention, a security model can be used to protect the message queue in the middleware machine environment.
[0068] Figure 9 shows an illustration of a security model that can be used to protect a message queue in a middleware machine environment, in accordance with an embodiment of the invention. As shown in Figure 9, a message receiver 902 can be configured to communicate with a message sender 901. A daemon process 910 on the server node that is associated with the message receiver 902 can create a key or a security token 914, when the daemon process first creates a message queue 906 in a shared memory 904 on the server machine for communicating with the message sender 901.
[0069] In accordance with an embodiment of the invention, the daemon process 910 can further register the key or the security token 914 with the IB network, and send the security token 914 to the message sender 901 on the client node via a secured network 920. As shown in Figure 9, the message sender 901 can also be associated with a daemon process 905. There can be a separate communication link, for example a dedicated process in the secured network 920, between the daemon process 905 on the message sender 901 and the daemon process 910 on the message receiver 902.
[0070] In accordance with an embodiment of the invention, after the message sender 901 receives the security token 914, the message sender 901 can access the shared memory 904 in the receiver machine directly. As shown in Figure 9, the message sender 901 on the client node can use the security token 914 to perform an RDMA write operation 921 for writing a message directly in a heap data structure 908 in the shared memory 904 on the receiver side.
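The token check gating such a direct write can be modeled as follows. This is a hypothetical software illustration only: on an InfiniBand network, the role of the security token is typically played by the remote key (rkey) of a registered memory region, and the check is enforced by the adapter hardware rather than by code like this. The rule it illustrates is that a write is accepted only while the registration is active and the sender presents the key created with the queue.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t token;   /* key created by the daemon when the queue was made */
    bool     active;  /* registration still valid */
} queue_registration;

/* Accept an incoming RDMA write only if the presented key matches the
 * token registered at queue creation. */
bool accept_rdma_write(const queue_registration *reg, uint32_t presented) {
    return reg->active && reg->token == presented;
}
```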
[0071] Figure 10 illustrates an exemplary flow chart for protecting a message queue in a middleware machine environment, in accordance with an embodiment of the invention. As shown in Figure 10, at step 1001, a daemon process on a message receiver can create a security token on a server node, when the daemon process first creates a message queue in a shared memory on the server node for communicating with a client node. Then, at step 1002, the daemon process on a message receiver can send the created security token from the server node to the client node via a secured network. Finally, at step 1003, after receiving the security token at the client side, the message sender can directly write a message into the message queue in the shared memory.
[0072] The present invention may be conveniently implemented using one or more conventional general purpose or specialized digital computer, computing device, machine, or microprocessor, including one or more processors, memory and/or computer readable storage media programmed according to the teachings of the present disclosure. Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art.
[0073] In some embodiments, the present invention includes a computer program product which is a storage medium or computer readable medium (media) having instructions stored thereon/in which can be used to program a computer to perform any of the processes of the present invention. The storage medium can include, but is not limited to, any type of disk including floppy disks, optical discs, DVD, CD-ROMs, microdrive, and magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
[0074] The foregoing description of the present invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations will be apparent to the practitioner skilled in the art. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to understand the invention for various embodiments and with various modifications that are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.

Claims

What is claimed is: 1. A system for providing message queues in a middleware machine environment, comprising:
one or more microprocessors;
a first message control data structure on a message receiver, and
a heap data structure in a shared memory that is associated with the message receiver, wherein a message sender, running on the one or more microprocessors, operates to write a message directly into the heap data structure, and
maintain metadata associated with the message in the first message control data structure.
2. The system according to Claim 1, further comprising a second message control data structure on the message sender, wherein the message sender operates to maintain metadata associated with the message in the second message control data structure.
3. The system according to Claim 2, wherein the first message control data structure is a first ring structure.
4. The system according to Claim 3, wherein the second message control data structure is a second ring structure.
5. The system according to Claim 4, wherein each of the first ring structure and the second ring structure has a head pointer and a tail pointer.
6. The system according to Claim 5, wherein,
when the writer operates to write a message to the heap data structure in the shared memory, the writer updates the head pointers for both the first ring structure and the second ring structure, and
when the reader operates to read a message from the heap data structure in the shared memory, the reader updates the tail pointers for both the first ring structure and the second ring structure.
7. The system according to Claim 1, wherein the message receiver is on a server that is connected with a plurality of clients, and each said client keeps a private copy of the first message control data structure that is maintained in the shared memory.
8. The system according to Claim 7, wherein a lock is activated on the first message control data structure when an entry in the first message control data structure is currently being updated by a client.
9. The system according to Claim 8, wherein every other client is capable of noticing that the first message control data structure is locked and is prevented from accessing a corresponding portion of the shared memory that is associated with the entry in the first message control data structure.
10. The system according to Claim 9, wherein another client is allowed to access another portion of the shared memory that is associated with the entry in the message queue.
11. A method for providing message queues in a middleware machine environment, comprising:
providing a first message control data structure on a message receiver;
associating a heap data structure in a shared memory with the message receiver; allowing a message sender running on one or more microprocessors to
write a message directly into the heap data structure, and
maintain metadata associated with the message in the first message control data structure.
12. The method according to Claim 11, further comprising associating a second message control data structure with the message sender, wherein the message sender operates to maintain metadata associated with the message in the second message control data structure.
13. The method according to Claim 12, further comprising allowing the first message control data structure to be a first ring structure.
14. The method according to Claim 13, further comprising allowing the second message control data structure to be a second ring structure.
15. The method according to Claim 14, further comprising allowing each of the first ring structure and the second ring structure to have a head pointer and a tail pointer.
16. The method according to Claim 15, further comprising:
allowing the writer to update the head pointers for both the first ring structure and the second ring structure, when the writer operates to write a message to the heap data structure in the shared memory; and
allowing the reader to update the tail pointers for both the first ring structure and the second ring structure, when the reader operates to read a message from the heap data structure in the shared memory.
17. The method according to Claim 11, further comprising allowing the message receiver to reside on a server that is connected with a plurality of clients, wherein each said client keeps a private copy of the first message control data structure that is maintained in the shared memory.
18. The method according to Claim 17, further comprising activating a lock on the first message control data structure when an entry in the first message control data structure is currently being updated by a client.
19. The method according to Claim 18, further comprising allowing every other client to be capable of noticing that the first message control data structure is locked and is prevented from accessing a corresponding portion of the shared memory that is associated with the entry in the first message control data structure.
20. The method according to Claim 19, further comprising allowing another client to access another portion of the shared memory that is associated with the entry in the message queue.
21. A system for managing message queues in a middleware machine environment, comprising:
one or more microprocessors;
a shared memory on a message receiver, wherein the shared memory maintains one or more message queues in the middleware machine environment; and
a daemon process, running on the one or more microprocessors, that is capable of creating at least one message queue in the shared memory, when a client requests that the at least one message queue be set up to support sending and receiving messages.
22. The system according to Claim 21, wherein different processes on a client operate to use at least one proxy to communicate with the message server.
23. The system according to Claim 21, wherein the daemon process is further capable of creating a security token on a first server node, when a message queue is created in the shared memory on the first server node for communicating with a second server node.
24. The system according to Claim 23, wherein the daemon process is further capable of sending the security token from the first server node to the second server node via a first secured network.
25. The system according to Claim 24, wherein the daemon process is further capable of allowing the second server node to use the security token to send a message to the message queue in the shared memory.
26. The system according to Claim 21, wherein the daemon process is capable of creating and reserving a local message queue for local messaging.
27. The system according to Claim 26, wherein the local message queue is outside of the shared memory.
28. The system according to Claim 26, wherein a local server process operates to receive messages from both the local message queue and the at least one remote message queue.
29. The system according to Claim 21, wherein a client is allowed to determine whether a queue is created on a shared memory or a private memory.
30. The system according to Claim 21, wherein a client is allowed to continue to perform write operations in the shared memory on a server machine without waiting for a recovery, when an interruption occurs on the server machine.
31. A method for managing message queues in a middleware machine environment, comprising:
providing a shared memory on a message receiver, wherein the shared memory maintains one or more message queues in the middleware machine environment; and
creating, via a daemon process running on one or more microprocessors, at least one message queue in the shared memory, when a client requests that the at least one message queue be set up to support sending and receiving messages.
32. The method according to Claim 31, further comprising allowing different processes on a client to use at least one proxy to communicate with the message server.
33. The method according to Claim 31, further comprising creating a security token on a first server node, when a message queue is created in the shared memory on the first server node for communicating with a second server node.
34. The method according to Claim 33, further comprising sending the security token from the first server node to the second server node via a first secured network.
35. The method according to Claim 34, further comprising allowing the second server node to use the security token to send a message to the message queue in the shared memory.
36. The method according to Claim 31, further comprising creating and reserving a local message queue for local messaging.
37. The method according to Claim 36, further comprising allowing the local message queue to be outside of the shared memory.
38. The method according to Claim 36, further comprising allowing a local server process to receive messages from both the local message queue and the at least one remote message queue.
39. The method according to Claim 31, further comprising allowing a client to determine whether a queue is created on a shared memory or a private memory.
40. The method according to Claim 31, further comprising allowing a client to continue to perform write operations in the shared memory on a server machine without waiting for a recovery, when an interruption occurs on the server machine.
PCT/US2012/057634 2011-09-30 2012-09-27 System and method for providing and managing message queues for multinode applications in a middleware machine environment WO2013049399A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
IN1390CHN2014 IN2014CN01390A (en) 2011-09-30 2012-09-27
EP12773178.4A EP2761454A1 (en) 2011-09-30 2012-09-27 System and method for providing and managing message queues for multinode applications in a middleware machine environment
JP2014533333A JP6238898B2 (en) 2011-09-30 2012-09-27 System and method for providing and managing message queues for multi-node applications in a middleware machine environment
CN201280047474.0A CN103827829B (en) 2011-09-30 2012-09-27 System and method for providing and managing message queues for multinode applications in a middleware machine environment
KR1020147009464A KR102011949B1 (en) 2011-09-30 2012-09-27 System and method for providing and managing message queues for multinode applications in a middleware machine environment

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201161542119P 2011-09-30 2011-09-30
US61/542,119 2011-09-30
US13/572,491 US9996403B2 (en) 2011-09-30 2012-08-10 System and method for providing message queues for multinode applications in a middleware machine environment
US13/572,491 2012-08-10
US13/572,501 US9558048B2 (en) 2011-09-30 2012-08-10 System and method for managing message queues for multinode applications in a transactional middleware machine environment
US13/572,501 2012-08-10

Publications (1)

Publication Number Publication Date
WO2013049399A1 true WO2013049399A1 (en) 2013-04-04

Family

ID=47993694

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2012/057634 WO2013049399A1 (en) 2011-09-30 2012-09-27 System and method for providing and managing message queues for multinode applications in a middleware machine environment

Country Status (7)

Country Link
US (2) US9996403B2 (en)
EP (1) EP2761454A1 (en)
JP (2) JP6238898B2 (en)
KR (1) KR102011949B1 (en)
CN (1) CN103827829B (en)
IN (1) IN2014CN01390A (en)
WO (1) WO2013049399A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104133728A (en) * 2013-12-16 2014-11-05 腾讯科技(深圳)有限公司 Method and device for communication between processes
GB2532227A (en) * 2014-11-12 2016-05-18 Arm Ip Ltd Method of communication between a remote resource and a data processing device
CN105933408A (en) * 2016-04-20 2016-09-07 中国银联股份有限公司 Implementation method and device of Redis universal middleware
CN111629054A (en) * 2020-05-27 2020-09-04 北京金山云网络技术有限公司 Message processing method, device and system, electronic equipment and readable storage medium
JP2020161152A (en) * 2014-05-21 2020-10-01 オラクル・インターナショナル・コーポレイション System and method for supporting distributed data structure in distributed data grid

Families Citing this family (28)

Publication number Priority date Publication date Assignee Title
US9996403B2 (en) * 2011-09-30 2018-06-12 Oracle International Corporation System and method for providing message queues for multinode applications in a middleware machine environment
US9495325B2 (en) * 2013-12-30 2016-11-15 International Business Machines Corporation Remote direct memory access (RDMA) high performance producer-consumer message processing
DE112014006838T5 (en) * 2014-07-29 2017-04-20 Mitsubishi Electric Corporation Display Operating System
US9342384B1 (en) * 2014-12-18 2016-05-17 Intel Corporation Function callback mechanism between a central processing unit (CPU) and an auxiliary processor
US9792248B2 (en) 2015-06-02 2017-10-17 Microsoft Technology Licensing, Llc Fast read/write between networked computers via RDMA-based RPC requests
US10509764B1 (en) * 2015-06-19 2019-12-17 Amazon Technologies, Inc. Flexible remote direct memory access
US10725963B2 (en) 2015-09-12 2020-07-28 Microsoft Technology Licensing, Llc Distributed lock-free RDMA-based memory allocation and de-allocation
US10713210B2 (en) 2015-10-13 2020-07-14 Microsoft Technology Licensing, Llc Distributed self-directed lock-free RDMA-based B-tree key-value manager
US10169116B2 (en) 2015-10-21 2019-01-01 International Business Machines Corporation Implementing temporary message queues using a shared medium
CN105446936B (en) * 2015-11-16 2018-07-03 上海交通大学 Distributed hashtable method based on HTM and unidirectional RDMA operation
US10375167B2 (en) 2015-11-20 2019-08-06 Microsoft Technology Licensing, Llc Low latency RDMA-based distributed storage
US10044595B1 (en) * 2016-06-30 2018-08-07 Dell Products L.P. Systems and methods of tuning a message queue environment
CN107612950B (en) * 2016-07-11 2021-02-05 阿里巴巴集团控股有限公司 Method, device and system for providing service and electronic equipment
EP3491792B1 (en) 2016-07-29 2021-02-17 Hewlett-Packard Enterprise Development LP Deliver an ingress packet to a queue at a gateway device
US10313282B1 (en) * 2016-10-20 2019-06-04 Sprint Communications Company L.P. Flexible middleware messaging system
US10198397B2 (en) 2016-11-18 2019-02-05 Microsoft Technology Licensing, Llc Flow control in remote direct memory access data communications with mirroring of ring buffers
CN106789431B (en) * 2016-12-26 2019-12-06 中国银联股份有限公司 Overtime monitoring method and device
WO2018129706A1 (en) * 2017-01-13 2018-07-19 Oracle International Corporation System and method for conditional call path monitoring in a distributed transactional middleware environment
CN109032821B (en) * 2018-08-27 2021-12-28 百度在线网络技术(北京)有限公司 Automatic driving subject message processing method, device, equipment and storage medium
CN111327511B (en) * 2018-12-14 2022-04-12 北京京东尚科信息技术有限公司 Instant messaging method, system, terminal equipment and storage medium
CN109815029B (en) * 2019-01-10 2023-03-28 西北工业大学 Method for realizing communication between partitions of embedded partition operating system
JP2020154805A (en) 2019-03-20 2020-09-24 キオクシア株式会社 Multiprocessor system and shared memory control method
CN110109873B (en) * 2019-05-08 2023-04-07 重庆大学 File management method for message queue
US11520572B2 (en) * 2019-09-13 2022-12-06 Oracle International Corporation Application of scheduled patches
CN110955535B (en) * 2019-11-07 2022-03-22 浪潮(北京)电子信息产业有限公司 Method and related device for calling FPGA (field programmable Gate array) equipment by multi-service request process
CN113626221B (en) * 2021-08-10 2024-03-15 迈普通信技术股份有限公司 Message enqueuing method and device
CN113742112B (en) * 2021-09-15 2024-04-16 武汉联影智融医疗科技有限公司 Electrocardiogram image generation method, system and electronic device
CN114584566A (en) * 2022-02-16 2022-06-03 深圳金融电子结算中心有限公司 Data processing method, device and equipment based on message queue and storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
US6557056B1 (en) * 1998-12-30 2003-04-29 Nortel Networks Limited Method and apparatus for exchanging data between transactional and non-transactional input/output systems in a multi-processing, shared memory environment
US6766358B1 (en) * 1999-10-25 2004-07-20 Silicon Graphics, Inc. Exchanging messages between computer systems communicatively coupled in a computer system network
US20080069098A1 (en) * 2006-09-15 2008-03-20 Shah Mehul A Group communication system and method

Family Cites Families (69)

Publication number Priority date Publication date Assignee Title
EP0473714A1 (en) * 1989-05-26 1992-03-11 Massachusetts Institute Of Technology Parallel multithreaded data processing system
JPH0520100A (en) * 1991-07-11 1993-01-29 Mitsubishi Electric Corp Operating system
JPH0721038A (en) 1993-06-30 1995-01-24 Hitachi Ltd Inter-program communication method
JPH07152709A (en) 1993-11-30 1995-06-16 Hitachi Ltd Inter-processor communication method
US5784615A (en) 1994-12-13 1998-07-21 Microsoft Corporation Computer system messaging architecture
JPH08212180A (en) * 1995-02-08 1996-08-20 Oki Electric Ind Co Ltd Inter-process communication processor
US5961651A (en) 1996-04-15 1999-10-05 Sun Microsystems, Inc. Event notification in a computing system having a plurality of storage devices
US5916307A (en) * 1996-06-05 1999-06-29 New Era Of Networks, Inc. Method and structure for balanced queue communication between nodes in a distributed computing application
US5951657A (en) * 1996-06-19 1999-09-14 Wisconsin Alumni Research Foundation Cacheable interface control registers for high speed data transfer
US6047391A (en) * 1997-09-29 2000-04-04 Honeywell International Inc. Method for strong partitioning of a multi-processor VME backplane bus
US6215792B1 (en) * 1998-06-30 2001-04-10 Motorola, Inc. System, device, and method for initial ranging in a communication network
US6667972B1 (en) * 1999-01-08 2003-12-23 Cisco Technology, Inc. Method and apparatus providing multi-service connections within a data communications device
JP3437933B2 (en) 1999-01-21 2003-08-18 インターナショナル・ビジネス・マシーンズ・コーポレーション Browser sharing method and system
US7970898B2 (en) * 2001-01-24 2011-06-28 Telecommunication Systems, Inc. System and method to publish information from servers to remote monitor devices
US6847991B1 (en) * 2000-09-06 2005-01-25 Cisco Technology, Inc. Data communication among processes of a network component
GB0028237D0 (en) * 2000-11-18 2001-01-03 Ibm Method and apparatus for communication of message data
US6985951B2 (en) 2001-03-08 2006-01-10 International Business Machines Corporation Inter-partition message passing method, system and program product for managing workload in a partitioned processing environment
US20020129172A1 (en) * 2001-03-08 2002-09-12 International Business Machines Corporation Inter-partition message passing method, system and program product for a shared I/O driver
US7248585B2 (en) 2001-10-22 2007-07-24 Sun Microsystems, Inc. Method and apparatus for a packet classifier
US6871265B1 (en) 2002-02-20 2005-03-22 Cisco Technology, Inc. Method and apparatus for maintaining netflow statistics using an associative memory to identify and maintain netflows
US7822980B2 (en) 2002-03-15 2010-10-26 International Business Machines Corporation Authenticated identity propagation and translation within a multiple computing unit environment
US7330927B1 (en) * 2003-05-07 2008-02-12 Avago Technologies General Ip (Singapore) Pte. Ltd. Apparatus and methodology for a pointer manager
KR100887179B1 (en) 2003-05-27 2009-03-10 인터내셔널 비지네스 머신즈 코포레이션 System for defining an alternate channel routing mechanism in a messaging middleware environment
GB0328576D0 (en) 2003-12-10 2004-01-14 IBM Method and apparatus for browsing a list of data items
US7953903B1 (en) * 2004-02-13 2011-05-31 Habanero Holdings, Inc. Real time detection of changed resources for provisioning and management of fabric-backplane enterprise servers
US20050251856A1 (en) * 2004-03-11 2005-11-10 Aep Networks Network access using multiple authentication realms
JP2005284840A (en) 2004-03-30 2005-10-13 Matsushita Electric Ind Co Ltd Message communication circuit, message transmission method, message management method, and message communication system
US7613813B2 (en) * 2004-09-10 2009-11-03 Cavium Networks, Inc. Method and apparatus for reducing host overhead in a socket server implementation
US7546613B2 (en) * 2004-09-14 2009-06-09 Oracle International Corporation Methods and systems for efficient queue propagation using a single protocol-based remote procedure call to stream a batch of messages
US7882317B2 (en) * 2004-12-06 2011-02-01 Microsoft Corporation Process isolation using protection domains
WO2006073969A2 (en) 2005-01-06 2006-07-13 Tervela, Inc. Intelligent messaging application programming interface
US7987306B2 (en) * 2005-04-04 2011-07-26 Oracle America, Inc. Hiding system latencies in a throughput networking system
KR100725066B1 (en) 2005-08-02 2007-06-08 한미아이티 주식회사 A system server for data communication with multiple clients and a data processing method
US8196150B2 (en) * 2005-10-07 2012-06-05 Oracle International Corporation Event locality using queue services
US8255455B2 (en) 2005-12-30 2012-08-28 SAP AG Method and system for message oriented middleware virtual provider distribution
US7464121B2 (en) * 2006-01-06 2008-12-09 International Business Machines Corporation Apparatus for sending a sequence of asynchronous messages to the same member of a clustered consumer
JP2007304786A (en) 2006-05-10 2007-11-22 Nec Corp Method for copying host memory between computer apparatus, computer apparatus, and computer program
US8122144B2 (en) 2006-06-27 2012-02-21 International Business Machines Corporation Reliable messaging using redundant message streams in a high speed, low latency data communications environment
US7996583B2 (en) * 2006-08-31 2011-08-09 Cisco Technology, Inc. Multiple context single logic virtual host channel adapter supporting multiple transport protocols
US7949815B2 (en) * 2006-09-27 2011-05-24 Intel Corporation Virtual heterogeneous channel for message passing
US7921427B2 (en) 2007-03-27 2011-04-05 Oracle America, Inc. Method and system for processing messages in an application cluster
JP2009199365A (en) * 2008-02-21 2009-09-03 Funai Electric Co Ltd Multi-task processing system
US8849988B2 (en) * 2008-11-25 2014-09-30 Citrix Systems, Inc. Systems and methods to monitor an access gateway
JP2010165022A (en) 2009-01-13 2010-07-29 Ricoh Co Ltd Inter-processor communication device, inter-processor communication method, program, and recording medium
US20100250684A1 (en) * 2009-03-30 2010-09-30 International Business Machines Corporation High availability method and apparatus for shared resources
JP2011008678A (en) 2009-06-29 2011-01-13 Hitachi Ltd Data transfer device and computer system
US20110030039A1 (en) 2009-07-31 2011-02-03 Eric Bilange Device, method and apparatus for authentication on untrusted networks via trusted networks
US20120221621A1 (en) * 2009-10-15 2012-08-30 Tomoyoshi Sugawara Distributed system, communication means selection method, and communication means selection program
US9094210B2 (en) 2009-10-26 2015-07-28 Citrix Systems, Inc. Systems and methods to secure a virtual appliance
CN101719960B (en) 2009-12-01 2012-07-11 中国电信股份有限公司 Communication device and cdma terminal
US8819701B2 (en) 2009-12-12 2014-08-26 Microsoft Corporation Cloud computing monitoring and management system
US8667575B2 (en) 2009-12-23 2014-03-04 Citrix Systems, Inc. Systems and methods for AAA-traffic management information sharing across cores in a multi-core system
US9081501B2 (en) * 2010-01-08 2015-07-14 International Business Machines Corporation Multi-petascale highly efficient parallel supercomputer
EP2569718B1 (en) 2010-05-11 2018-07-11 Intel Corporation Recording dirty information in software distributed shared memory systems
US8607200B2 (en) * 2010-06-01 2013-12-10 Red Hat, Inc. Executing a web application at different stages in the application life cycle
US8667505B2 (en) * 2010-09-14 2014-03-04 Microsoft Corporation Message queue management
US8738860B1 (en) * 2010-10-25 2014-05-27 Tilera Corporation Computing in parallel processing environments
US8924964B2 (en) 2010-11-01 2014-12-30 Microsoft Corporation Dynamic allocation and assignment of virtual environment
US8595715B2 (en) 2010-12-31 2013-11-26 International Business Machines Corporation Dynamic software version selection
US9100443B2 (en) * 2011-01-11 2015-08-04 International Business Machines Corporation Communication protocol for virtual input/output server (VIOS) cluster communication
US8839267B2 (en) 2011-02-21 2014-09-16 Universidade Da Coruna-Otri Method and middleware for efficient messaging on clusters of multi-core processors
US9141527B2 (en) * 2011-02-25 2015-09-22 Intelligent Intellectual Property Holdings 2 Llc Managing cache pools
US8677031B2 (en) 2011-03-31 2014-03-18 Intel Corporation Facilitating, at least in part, by circuitry, accessing of at least one controller command interface
US8533734B2 (en) 2011-04-04 2013-09-10 International Business Machines Corporation Application programming interface for managing time sharing option address space
US20120331153A1 (en) 2011-06-22 2012-12-27 International Business Machines Corporation Establishing A Data Communications Connection Between A Lightweight Kernel In A Compute Node Of A Parallel Computer And An Input-Output ('I/O') Node Of The Parallel Computer
US8806269B2 (en) 2011-06-28 2014-08-12 International Business Machines Corporation Unified, workload-optimized, adaptive RAS for hybrid systems
US10216553B2 (en) * 2011-06-30 2019-02-26 International Business Machines Corporation Message oriented middleware with integrated rules engine
US9996403B2 (en) * 2011-09-30 2018-06-12 Oracle International Corporation System and method for providing message queues for multinode applications in a middleware machine environment
WO2013048477A1 (en) * 2011-09-30 2013-04-04 Intel Corporation Direct i/o access for system co-processors

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6557056B1 (en) * 1998-12-30 2003-04-29 Nortel Networks Limited Method and apparatus for exchanging data between transactional and non-transactional input/output systems in a multi-processing, shared memory environment
US6766358B1 (en) * 1999-10-25 2004-07-20 Silicon Graphics, Inc. Exchanging messages between computer systems communicatively coupled in a computer system network
US20080069098A1 (en) * 2006-09-15 2008-03-20 Shah Mehul A Group communication system and method

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104133728A (en) * 2013-12-16 2014-11-05 腾讯科技(深圳)有限公司 Method and device for communication between processes
JP2020161152A (en) * 2014-05-21 2020-10-01 オラクル・インターナショナル・コーポレイション System and method for supporting distributed data structure in distributed data grid
GB2532227A (en) * 2014-11-12 2016-05-18 Arm Ip Ltd Method of communication between a remote resource and a data processing device
US10922155B2 (en) 2014-11-12 2021-02-16 Arm Ip Limited Methods of communication between a remote resource and a data processing device
GB2532227B (en) * 2014-11-12 2021-10-27 Arm Ip Ltd Methods of communication between a remote resource and a data processing device
CN105933408A (en) * 2016-04-20 2016-09-07 中国银联股份有限公司 Implementation method and device of Redis universal middleware
CN111629054A (en) * 2020-05-27 2020-09-04 北京金山云网络技术有限公司 Message processing method, device and system, electronic equipment and readable storage medium
CN111629054B (en) * 2020-05-27 2022-06-03 北京金山云网络技术有限公司 Message processing method, device and system, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
JP6238898B2 (en) 2017-11-29
CN103827829B (en) 2017-03-22
JP6549663B2 (en) 2019-07-24
KR102011949B1 (en) 2019-08-19
JP2017208145A (en) 2017-11-24
US9558048B2 (en) 2017-01-31
US9996403B2 (en) 2018-06-12
KR20140069126A (en) 2014-06-09
JP2014531687A (en) 2014-11-27
US20130086199A1 (en) 2013-04-04
US20130086183A1 (en) 2013-04-04
EP2761454A1 (en) 2014-08-06
IN2014CN01390A (en) 2015-05-08
CN103827829A (en) 2014-05-28

Similar Documents

Publication Publication Date Title
US9996403B2 (en) System and method for providing message queues for multinode applications in a middleware machine environment
EP3776162B1 (en) Group-based data replication in multi-tenant storage systems
CA2834146C (en) Managing message queues
US7949815B2 (en) Virtual heterogeneous channel for message passing
US20140082170A1 (en) System and method for small batching processing of usage requests
US10133489B2 (en) System and method for supporting a low contention queue in a distributed data grid
US9672038B2 (en) System and method for supporting a scalable concurrent queue in a distributed data grid
EP2761493B1 (en) System and method for supporting a complex message header in a transactional middleware machine environment
CN111897666A (en) Method, device and system for communication among multiple processes
US9910808B2 (en) Reflective memory bridge for external computing nodes
EP2761822B1 (en) System and method for supporting different message queues in a transactional middleware machine environment
CN117176811A (en) Blocking type asynchronous monitoring multi-client instruction and multi-hardware control server architecture, communication system and method
US10762011B2 (en) Reflective memory bridge for external computing nodes
Ong Network virtual memory
Alves et al. Scalable multithreading in a low latency Myrinet cluster
Gulati Reducing the Inter-Process Communication Time on Local Host by Implementing Seamless Socket like,“low latency” Interface over Shared Memory

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 12773178; Country of ref document: EP; Kind code of ref document: A1
ENP Entry into the national phase
    Ref document number: 2014533333; Country of ref document: JP; Kind code of ref document: A
NENP Non-entry into the national phase
    Ref country code: DE
ENP Entry into the national phase
    Ref document number: 20147009464; Country of ref document: KR; Kind code of ref document: A
WWE Wipo information: entry into national phase
    Ref document number: 2012773178; Country of ref document: EP