AU2011265444B2 - Low latency FIFO messaging system - Google Patents

Low latency FIFO messaging system

Info

Publication number
AU2011265444B2
Authority
AU
Australia
Prior art keywords
rdma
message
messages
remote
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
AU2011265444A
Other versions
AU2011265444A1 (en)
Inventor
Nishant AGRAWAL
Manoj Karunakaran Nambiar
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tata Consultancy Services Ltd
Original Assignee
Tata Consultancy Services Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tata Consultancy Services Ltd filed Critical Tata Consultancy Services Ltd
Publication of AU2011265444A1 publication Critical patent/AU2011265444A1/en
Application granted granted Critical
Publication of AU2011265444B2 publication Critical patent/AU2011265444B2/en
Priority to AU2016201513A priority Critical patent/AU2016201513B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

LOW LATENCY FIFO MESSAGING SYSTEM A system for lockless remote messaging in an inter-process communication between processing nodes, as implemented by an RDMA supported Network Interface Card, is presented. The inter-process communication is implemented by using RDMA write operations accessed through the infiniband verbs library, over Infiniband or Ethernet networks. This provides direct access to the RDMA enabled NIC without system call overhead, achieving the low latency required for remote messaging and high messaging rates. The RDMA NIC receives the messages in bulk, as the remote sender process bundles together a plurality of messages to reduce the number of work requests per message transmitted and acknowledged. This requires the memory mapped structures hosted on the communicating processing nodes to be synchronized by RDMA.

Description

LOW LATENCY FIFO MESSAGING SYSTEM

FIELD OF THE INVENTION

The present invention relates to the field of inter processor messaging and more particularly relates to low latency remote messaging assisted by a Remote Direct Memory Access based first-in-first-out system.

BACKGROUND OF THE INVENTION

With the advent of computing acceleration, the exchange of data between two different software threads or processor cores needs to be fast and efficient. In general, the existing methods of remote messaging over typical TCP/IP schemes have the disadvantage of high CPU utilization for sending and receiving messages. In the TCP/IP messaging paradigm, a software thread does not share any common memory space with another software thread it desires to communicate with. Instead, sending and receiving a message to and from another software thread requires the use of the socket send() and socket recv() system calls respectively. Communication via a typical TCP/IP scheme therefore involves a large number of software instructions that are to be executed by CPU cores residing on both the sending and remote hosts. Additionally, every time a send() system call is executed there is a change of context from user level to system level, which amounts to a high CPU overhead. The same is true of the receive system call on the receiving end. Since the amount of data that must be exchanged between two different software threads has swelled, the message FIFO between two processor cores needs to be low latency so that the processors need not slow down due to frequent communication. With the TCP/IP protocol in place, it is very difficult to achieve low latency at high message rates because of the system calls that need to be executed by the application process in order to facilitate message exchange between the sending and receiving peers. This implies that the messaging infrastructure (including software) should be capable of processing very large workloads, meaning more than a million messages per second. Accordingly, keeping in view the workload the current messaging system presently demands and the future anticipated workload, a new system which ensures low latency messaging and optimized throughput is urgently required.

Thus, in the light of the above mentioned background of the art, it is evident that there is a need for a system and method which:
* provides a high throughput and low latency messaging technique for inter-process communication between at least two processes running on at least two nodes;
* increases the throughput optimization of the messaging system;
* reduces the latencies of the messaging system;
* requires minimum infrastructure;
* reduces the cost of the hardware setup to improve throughput and reduce the latency of the messaging system; and
* is easy to deploy on existing systems.
OBJECTIVES OF THE INVENTION

The principal object of the present invention is to provide a system for high throughput messaging in inter-process communication across the network with lower latencies at higher workloads between processes running on remote nodes.

Another significant object of the invention is to provide a high throughput and low latency messaging system for inter-process communication between multiple processes running on remote nodes.

It is another object of the present invention to provide a cost effective high throughput and low latency messaging system for inter-process communication between processes running on remote nodes.

Another object of the invention is to provide a system employing minimal computational resources by reducing CPU intervention for high throughput and low latency messaging, making the CPU more available for application programs.

Yet another object of the invention is to provide an inter-process messaging system requiring minimal infrastructure support by eliminating the need for an additional receiver process for receiving messages at the remote host, thereby removing one latency introducing component.

Yet another object of the invention is to reduce the number of additional message copies required for realizing a high throughput and low latency message passing technique in inter-process communication.

SUMMARY OF THE INVENTION

Before the present methods, systems, and hardware enablement are described, it is to be understood that this invention is not limited to the particular systems and methodologies described, as there can be multiple possible embodiments of the present invention which are not expressly illustrated in the present disclosures. It is also to be understood that the terminology used in the description is for the purpose of describing the particular versions or embodiments only, and is not intended to limit the scope of the present invention, which will be limited only by the appended claims.

The present invention envisages a system and method for low latency and high throughput messaging in inter-process communication between processes running on remote nodes. In the preferred embodiment of the invention the system implements an asynchronous lockless FIFO message queue between two server hosts using Remote Direct Memory Access (RDMA) technology. The inter-process communication is implemented by using RDMA write operations accessed through the infiniband verbs library, which obviates the use of the TCP/IP scheme provided by the operating system for remote messaging and its high system call overhead. The present system, on the contrary, provides direct access to RDMA enabled Network Interface Cards (NICs) without a system call overhead, which is key to achieving very low latency messaging. The RDMA NIC converts RDMA write operations into a series of RDMA protocol messages over TCP/IP which are acted upon by the RDMA NIC in the remote host, which makes the necessary updates to the memory of the remote host.
According to one of the preferred embodiments of the present invention, a system for lockless remote messaging in an inter-process communication between at least two processes running on at least two nodes, implemented by an RDMA supported Network Interface Card configured to synchronize a memory mapped file hosted on each of the said nodes, is provided, the system comprising:
a) a sending host node communicatively coupled with a receiving host node for sending and receiving messages over a computing network respectively;
b) an RDMA supported Network Interface Card deployed on each of the said host nodes for executing RDMA commands;
c) a storage hosted on each of the host nodes adapted to store inter-process messages invoked by either of the communicatively coupled host nodes;
d) a first memory mapped file hosted on the sending host node configured to synchronize a static circular queue of messages with a second memory mapped file hosted on the receiving host node, and vice versa; and
e) at least one remote sender process running on the sending host node for constituting at least one batch of messages and asynchronously sending the batch along with a corresponding RDMA work request, wherein the batch constitution involves a coordination between an operational status of the sending host node and a bulking variable to determine the number of messages within the batch, and wherein inclusion of an additional message in the batch is further determined by a predetermined aclat parameter.

According to one of the other preferred embodiments of the present invention, a memory mapped structure comprising computer executable program code is provided, wherein the said structure is configured to synchronize a static circular queue of messages between the sending and receiving host nodes, the structure comprising:
a) a plurality of messages bundled together to form at least one batch, each batch comprising a sequence of payload sections, wherein each payload section is intermittently followed by a corresponding node counter structure to constitute a contiguous memory region, and wherein the payload sections are further coupled with a common queue data and contiguously arranged headers;
b) an rdma free pointing element adapted to point to the message buffer in which the sending host node inserts a new message;
c) an rdma insertion counter to count the number of messages inserted by the sending host node;
d) a receiving node counter structure element, responsive to the receiving host node, configured to enable said receiving host node to issue one RDMA work request for acknowledging at least one message from the batch;
e) a last sent message node pointing element of the common queue data to point to the node counter structure of the last message sent from a remote sender process to the receiving host node; and
f) a last received message node pointing element of the common queue data to point to the message last received by the receiving host node.
In one of the other embodiments of the present invention, a method for lockless remote messaging in an inter-process communication between at least two processes running on at least one sending and one receiving host node, implemented by an RDMA supported Network Interface Card configured to synchronize a queue of messages via a memory mapped file hosted on each of the said nodes, is provided, the said method comprising:
a) initializing transfer of a message from the sending host node to the corresponding memory mapped file whenever an indication that the message has been read from the message buffer by the receiving host node is received, and accordingly updating an rdma free pointing element and an rdma insertion counter to indicate to the sending host node that the next message may be transferred to the message buffer;
b) performing constitution of at least one batch of messages in a remote sender process, wherein the batch constitution is based upon a coordination between an operational status of the sending host node and a bulking variable to determine the number of messages within the batch, and wherein a determination for inclusion of a next message in the constituted batch is further dependent upon a predetermined aclat parameter;
c) updating a batch size of the constituted batch, a node counter structure of a previous node to detect arrival of any new message, and a last sent message pointing element to point to the last message in the message batch to be read by the receiving host;
d) issuing an RDMA work request for transmitting the contiguous message buffer section associated with the batch of messages; and
e) initializing transfer of the message batch from a memory mapped file to the corresponding receiving host node and updating a last received message pointing element along with a data pointing element to indicate the arrival of the message batch to be read by the receiving host.

BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing summary, as well as the following detailed description of preferred embodiments, are better understood when read in conjunction with the appended drawings. For the purpose of illustrating the invention, there is shown in the drawings example constructions of the invention; however, the invention is not limited to the specific methods and system disclosed. In the drawings:

Figure 1 illustrates a typical layout of a Memory Mapped File as known in the prior art.
Figure 2 shows a circular queue of messages as represented in a Memory Mapped File layout.
Figure 3 illustrates a system for inter process communication between two server hosts with memory mapped files synchronized using RDMA.
Figure 4 shows a design layout of a Memory Mapped File in accordance with the preferred embodiment of the present invention.
Figure 5 shows the implementation set up of the system in accordance with one of the preferred embodiments of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

Some embodiments of this invention, illustrating all its features, will now be discussed in detail. The words "comprising," "having," "containing," and "including," and other forms thereof, are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise.
Although any systems and methods similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present invention, the preferred systems and methods are now described. The disclosed embodiments are merely exemplary of the invention, which may be embodied in various forms.

Definitions:

Throughput: The number of messages that can be read from, or written to, the queue per second is called the throughput.

Latency: The time elapsed between the sending of a message by the sender process and the receiving of that message by the receiver process is the latency experienced by that message.

RDMA write operation: An RDMA write operation, interchangeably called an RDMA write work request, is a command issued to the RDMA capable NIC. It is a user level call which notifies the local NIC about where the data is placed in RDMA registered memory (RAM) and its length. The NIC then (asynchronously) fetches the data and transfers it across the network using the relevant (iWARP) RDMA protocol. In effect, it writes the specified data from the local memory location to the memory location on the remote host. The NIC on the remote host responds to the iWARP messages by placing the data in the RDMA registered memory of its host, thus carrying out the RDMA write work request. The RDMA write operations are accessed through the infiniband verbs library.

Memory registration: These are APIs provided by RDMA to make a local memory region available to remote hosts. This is essential for the use of RDMA write operations.

RDMA technology allows applications to access memory on remote hosts as if it were available on the same host where the application runs. RDMA was first introduced in Infiniband networks, whereby the native infiniband protocol was used, and was later supported on Ethernet networks using iWARP. In both networks, the network interface cards (NICs) are capable of executing RDMA write commands which cause the placement of data in the memory area of the remote host. Though for the purposes of this invention an Ethernet network is referred to later in this document for illustrative purposes, the invention is not limited to Ethernet networks and can be implemented on an Infiniband network using the native infiniband protocol.

An RDMA interface allows an application to read from and/or write into memory (RAM) locations of a remote host. This is very much unlike sending and receiving messages. It gives the application the illusion of a shared memory between the sender and receiver processes even though they run on separate hosts. The device drivers of NICs supporting RDMA operation provide a direct interface for application programs to send data bypassing the operating system. The costly overhead of switching from user mode to system mode is avoided, thereby allowing the application to be executed by the CPU more efficiently. Further, the RDMA NIC implements the complex networking tasks required to transfer the message from the local host to the remote host without any CPU intervention, making the CPU more available for application programs. The other advantageous feature of the present invention is eliminating the need to perform an additional copy operation as required in a typical system call assisted communication.
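For the purpose of illustration only, the following minimal C sketch shows memory registration and the issuing of one RDMA write work request through the infiniband verbs library. It assumes that a protection domain, a queue pair and the remote address and remote key (rkey) have already been established during connection setup; the helper names register_region and rdma_write are assumptions made for the sketch, not part of the disclosed system.

#include <infiniband/verbs.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Memory registration: make a local region available for RDMA so the
 * local NIC can read it and the remote NIC may write its counterpart. */
static struct ibv_mr *register_region(struct ibv_pd *pd, void *buf, size_t len)
{
    return ibv_reg_mr(pd, buf, len,
                      IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_WRITE);
}

/* One RDMA write work request: tells the local NIC where the data sits
 * in registered memory and where it must land on the remote host. The
 * NIC fetches and transfers the data asynchronously. */
static int rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                      void *local, uint32_t len,
                      uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge;
    struct ibv_send_wr wr, *bad_wr = NULL;

    memset(&sge, 0, sizeof(sge));
    sge.addr   = (uintptr_t)local;
    sge.length = len;
    sge.lkey   = mr->lkey;

    memset(&wr, 0, sizeof(wr));
    wr.opcode     = IBV_WR_RDMA_WRITE;
    wr.sg_list    = &sge;
    wr.num_sge    = 1;                 /* one entry: the region is contiguous */
    wr.send_flags = IBV_SEND_SIGNALED; /* completion reported on the CQ */
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    return ibv_post_send(qp, &wr, &bad_wr);
}

Note that ibv_post_send is a user level call: on typical RDMA capable NICs, posting the work request after connection setup involves no transition into the operating system kernel on the data path.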
For an RDMA write operation the NIC does a Direct Memory Access transfer with the source data taken from the registered memory region, which the application running in user mode can directly write the message into, thereby obviating the need for that additional copy. Also, as the RDMA write operation takes up the entire responsibility of making the data available in the registered memory region of the remote host where the receiver process runs, there is no need for a separate receiver process to manage the receiving of messages from the network, which effectively contributes to the removal of one latency introducing component in the system.

As shown in Figures 1 and 2, a typical layout of a memory mapped file contains a static circular queue of messages. Each message structure in the file has a header section 101 and a payload section 102. The payload section 102 contains the raw message as passed by the application; it is also referred to as the message buffer. The header section 101 contains the pointer to the header of the next message, thus creating a circular queue of messages. The initial part of the file contains data specific to the queue. Some of the important variables in this section are:
- data_anchor 103 - points to the next message to be read by the receiver.
- free_node 104 - points to the message that will be written to by the sender.
- number_of_inserts 105 - number of messages sent by the sender (since queue creation).
- number_of_deletes 106 - number of messages read by the receiver (since queue creation).

Referring to Figure 2, an additional free node pointer called rdma_free_node 201 and a new counter variable called rdma_inserts 202 have been introduced, which split the typical message structure into two. The messages pointed to by free_node 104 up to the message pointed to by rdma_free_node 201 represent the messages that have been queued by the sender process but are yet to be transferred to the remote host (server B) via RDMA. The messages from free_node to data_anchor 103 are already in transit to (or have reached) the remote host, waiting to be acknowledged by the receiver process (on server B) through the update of the data_anchor pointer.

Now, referring to Figure 3, a system for inter process communication between two server hosts with memory mapped files synchronized using RDMA is presented. The system 300, according to one of the preferred embodiments of the present invention, comprises:
- Physical Server 301 - the host on which the sender application process runs.
- Physical Server 302 - the host on which the receiver application runs.
- RDMA capable network interface card (NIC) on server 301 - capable of executing RDMA commands from the local or remote host.
- RDMA capable network interface card on server 302 - capable of executing RDMA commands from the local or remote host.
- Messaging Library - contains the message send and receive functions which are linked in and invoked by the sender and receiver application processes.
- Memory mapped file 303 on server 301 - contains the FIFO queue which is used for sending and receiving messages. It is synchronized with server 302.
- Memory mapped file 304 on server 302 - contains the FIFO queue which is used for sending and receiving messages. It is synchronized with server 301.
- Remote Sender process 305 running on server 301 - the component responsible for batching incoming messages via RDMA. It groups all messages from free_node 104 to rdma_free_node 201 and issues RDMA work requests for the entire group of messages.
- Ethernet or Infiniband switch (optional) - the switch that connects servers 301 and 302.
Referring next to Figure 4, a design layout of the Memory Mapped File is shown. The figure represents a separate section for buffer headers which point to the payload areas of the buffers. The headers are contiguously allocated in one memory area, and so are the payload sections. There is also an area where the data common to the queue is stored: data_anchor 103, the free_node pointer 104, number_of_inserts 105 and number_of_deletes 106 are all examples of such common queue data 401.

Within the common queue data area 401, the memory mapped file contains two variables, the free_node pointer 104 and number_of_inserts 105, that have been grouped together in a single structure. This helps send the entire structure in 1 RDMA write work request instead of sending the variables in separate work requests, thereby eliminating one latency introducing component. This structure will now be known as the node_counter structure 402.

In every update issued by the remote sender process RS 305 there are two work requests. One work request points to the payload region in the file; the other work request points to the node_counter structure. These work requests cannot be combined, because one work request can point only to a contiguous region of memory. To reduce the two work requests required per update to one request, it is necessary to combine the two sets of data in a different way. Figure 4 depicts an optimized memory layout wherein the node_counter structure 402 has been repeated at the end of the payload section of every message. Thus it is now possible to combine the message payload plus the node_counter structure into one work request, as they are both in a contiguous memory region. The newly added variables and the new meanings of modified variables on the sending side are as follows:
- rdma_free_node - points to the message buffer in which the sender will insert the next new message.
- rdma_inserts - number of messages inserted by the sender process since the queue was created.
- node_counter.free_node - points to the next message starting from which the remote sender process will batch messages in order to send them as part of one RDMA write work request.
- node_counter.number_of_inserts - the number of messages that have been updated to the remote host (via RDMA write work requests) since the creation of the queue.

Also, data_anchor and number_of_deletes can be grouped together in a single structure. This helps the receiver process send the entire structure in 1 RDMA write work request instead of sending the variables in separate work requests. This structure will be known as the receiver_node_counter structure. For the receiver process, receiver_node_counter.data_anchor functions the same as data_anchor, and receiver_node_counter.number_of_deletes functions the same as number_of_deletes.
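For illustration, one possible C rendering of this memory layout is sketched below. The struct and field names mirror the variables described above (the last_sent and last_received pointers and the write_on flag are introduced with the algorithms that follow); the fixed sizes and the use of ring indices in place of raw pointers are assumptions made for the sketch, since raw addresses would not be meaningful on the remote host.

#include <stdint.h>

#define MAX_MSG_SIZE 1024   /* assumed payload capacity per message buffer */
#define QUEUE_SIZE   1000   /* maximum queue size, as in the example setup below */

/* free_node and number_of_inserts grouped so that both travel in one
 * RDMA write work request (the node_counter structure 402). */
struct node_counter {
    uint64_t free_node;           /* next message from which batching starts */
    uint64_t number_of_inserts;   /* messages updated to the remote host */
};

/* data_anchor and number_of_deletes grouped likewise for the receiver's
 * single acknowledgement work request (receiver_node_counter). */
struct receiver_node_counter {
    uint64_t data_anchor;         /* next message to be read by the receiver */
    uint64_t number_of_deletes;   /* messages read since queue creation */
};

/* Each payload is immediately followed by its node_counter, so payload
 * plus node_counter form one contiguous region for one work request. */
struct message_node {
    char payload[MAX_MSG_SIZE];
    struct node_counter nc;
};

/* Data common to the queue, kept at the start of the file. */
struct common_queue_data {
    uint64_t rdma_free_node;   /* buffer the sender writes the next message into */
    uint64_t rdma_inserts;     /* messages inserted by the sender since creation */
    uint64_t last_sent_message_node_counter_pointer;      /* introduced below */
    uint64_t last_received_message_node_counter_pointer;  /* introduced below */
    struct receiver_node_counter rnc;
    volatile int write_on;     /* 1 while the sender is queuing a message */
};

/* Overall memory mapped file: common data, contiguously allocated
 * headers, then the payload+node_counter nodes. */
struct mmap_queue {
    struct common_queue_data common;
    uint64_t headers[QUEUE_SIZE];   /* each points to the next message's header */
    struct message_node nodes[QUEUE_SIZE];
};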
The optimization achieved by introducing the newly added variables for establishing low latency, high message throughput is presented below, wherein the number of RDMA work requests has been reduced to one by using the remote sender process 305. The modified algorithm of the sender process is as follows:

Loop:
a. If the next update of rdma_free_node equals data_anchor, keep checking; else continue to the next step
b. Copy the message from the user buffer to the local memory mapped file
c. Update rdma_free_node to point to the next data buffer
d. Increment the rdma_inserts counter

This time the sender process does not issue any RDMA work requests, as this work will now be done by the remote sender process (RS). The new optimized approach for the remote sender is as follows:

1. Register the local memory mapped file with the remote host 302 and perform the following operations:
a. If rdma_free_node equals free_node and rdma_inserts equals number_of_inserts, keep checking; else proceed to the next step
b. node_var = free_node
c. prev_node = NULL
d. Initialize message_group to null
e. Initialize group_size to 0
f. While node_var does not equal rdma_free_node:
i. Add the message pointed to by node_var into the message_group
ii. Increment group_size
iii. prev_node = node_var
iv. node_var = next node
g. Add group_size to number_of_inserts in the node_counter structure of the nodes pointed to by last_sent_message_node_counter_pointer and prev_node
h. Update free_node (in the node_counter structure of the nodes pointed to by last_sent_message_node_counter_pointer and prev_node) to point to the message buffer next to the last message in message_group
i. last_sent_message_node_counter_pointer = prev_node
j. Check the status of previous RDMA work requests and clear any that have completed
k. Issue 1 RDMA work request for the payload section of the messages in message_group

Here the variable last_sent_message_node_counter_pointer is introduced in the common queue data area. This variable points to the node_counter structure of the last message which was sent to the remote server B. For the case of illustration it will point to the node_counter belonging to message node A in the above figure. This is done during queue creation, and similarly data_anchor, rdma_free_node and node_counter.free_node are made to point to message node A during queue creation, as in the previous implementations. The counters node_counter.number_of_inserts in all the message nodes and number_of_deletes in the common data area are initialized to zero during queue creation, as in the previous implementations. This initialization has not been explicitly mentioned before; it is mentioned now for completeness. The variable last_received_message_node_counter_pointer is likewise introduced in the common queue data area. This variable points to the node_counter structure of the last message received from the remote server A.

The optimized algorithm of the receiver process is as follows:

1. Register the local memory mapped file with the remote host (server A) and perform the following operations:
a. Check the status of previous RDMA work requests and clear any that have completed
b. If last_received_message_node_counter_pointer.free_node equals data_anchor, keep checking; else continue
c. Copy the message from the local memory mapped file to the user buffer
d. last_received_message_node_counter_pointer = data_anchor
e. Update the receiver_node_counter.data_anchor pointer to the next data buffer
f. Increment the receiver_node_counter.number_of_deletes counter
g. Issue 1 RDMA work request for updating the receiver_node_counter structure

The above adopted approach reduces the number of work requests in the remote sender process 305 to 1 by grouping the variables number_of_inserts and the free_node pointer into a single structure and placing the node_counter structure intermittently after each payload section.
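A C rendering of one iteration of the above remote sender algorithm is sketched below, using the illustrative mmap_queue layout and the hypothetical rdma_write() helper from the earlier sketches; it is a sketch of the technique, not the program code of the invention. It assumes the steady state in which the node_counter being updated immediately precedes the first batched payload, assumes the batch does not wrap around the circular queue (a wrapping batch would need a second work request or an early cut-off), and elides the completion polling of step j.

#include <stddef.h>
#include <stdint.h>

static void remote_sender_iteration(struct mmap_queue *q, struct ibv_qp *qp,
                                    struct ibv_mr *mr,
                                    uint64_t remote_base, uint32_t rkey)
{
    struct common_queue_data *c = &q->common;
    uint64_t last = c->last_sent_message_node_counter_pointer;
    struct node_counter *last_nc = &q->nodes[last].nc;

    /* Step a: nothing new queued by the sender yet, keep checking. */
    while (c->rdma_free_node == last_nc->free_node &&
           c->rdma_inserts == last_nc->number_of_inserts)
        ; /* spin */

    /* Steps b-f: batch every message from free_node up to rdma_free_node. */
    uint64_t node_var = last_nc->free_node;
    uint64_t prev_node = node_var, group_size = 0;
    while (node_var != c->rdma_free_node) {
        prev_node = node_var;
        group_size++;
        node_var = (node_var + 1) % QUEUE_SIZE;  /* next node in the ring */
    }

    /* Steps g-h: update the node_counter of the previously last-sent node
     * and of prev_node, so the receiver can detect both the new messages
     * and the end of the batch. */
    last_nc->number_of_inserts += group_size;
    q->nodes[prev_node].nc.number_of_inserts = last_nc->number_of_inserts;
    last_nc->free_node = q->nodes[prev_node].nc.free_node = node_var;

    /* Step i: the next batch must update prev_node's node_counter. */
    c->last_sent_message_node_counter_pointer = prev_node;

    /* Step k: one work request covers the contiguous span from the
     * node_counter of the previously last-sent message through the last
     * batched payload+node_counter pair (B's node_counter through E's,
     * in the worked example below). */
    size_t base  = offsetof(struct mmap_queue, nodes);
    size_t start = base + (size_t)last * sizeof(struct message_node)
                        + offsetof(struct message_node, nc);
    size_t end   = base + (size_t)(prev_node + 1) * sizeof(struct message_node);
    rdma_write(qp, mr, (char *)q + start, (uint32_t)(end - start),
               remote_base + start, rkey);
}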
The deciding factor of performance for such a system is the maximum number of work requests that can be executed per second by the NIC supporting RDMA. With this in mind it should be ensured that the number of work requests per update is optimized. By batching the messages and grouping variables as in the previous optimization, the number of work requests that go into one update has been reduced.

EXAMPLE OF WORKING OF THE INVENTION

The invention is described in the example given below, which is provided only to illustrate the invention and therefore should not be construed to limit the scope of the invention. Referring to Figure 4, it is assumed that the remote sender process 305 has batched 3 messages C, D and E and wants to update the remote host 302 using RDMA. The memory region to be updated in the batch is marked in the referred figure. Note that this memory region will include the node_counter structures for messages B, C, D and E. Also important to note is that the only node_counter structures that need to be updated are the ones attached to the payloads of messages B and E. The reasoning for this is as follows:
- node_counter structure attached to B: prior to messages C, D and E, the last message sent from the remote sender to the receiver was B. The node_counter structure of B was also updated as part of that last message. So the receiver will be checking the free_node pointer in the node_counter structure attached to B to determine if any new message has come.
- node_counter structure attached to E: once the batch is updated to the remote host and the messages C, D and E are read by the receiver, the node_counter structure attached to the payload of E is checked by the receiver to know that there are no further messages. Only the next batch update from the remote sender will update this node_counter structure to indicate that more messages have been inserted in the queue.

Optimization by increasing the number of messages being grouped by the Remote Sender Process

Next, explained is the optimization level achieved upon increasing the number of messages being grouped by the remote sender process 305. In all the above discussed optimization approaches, it is witnessed that the number of messages being grouped is not significant; in fact the average size of a group of messages sent by the remote sender process was less than 2. It is therefore understood that a need exists for adding more messages to the group to get efficient message transmissions. If the remote sender process 305 waited for more messages for an indefinite time then it would add to the latency of messages, so there has to be an upper bound on how many messages can be grouped together. This upper bound is referred to as the up-limit for the purposes of this invention. However, the remote sender process 305 need not wait for the entire up-limit number of messages to make a group of messages to send. It shall also be understood that there is no guarantee on when messages arrive. So in addition to this up-limit there can be some more indicators to decide whether to continue grouping messages (hereinafter called "bulking") or not. Consider a situation where the sender process is queuing a message. In such a case, it is a good enough indication for the remote sender process 305 to wait for the next message to be bulked.
On the contrary, in a situation where the sender process is not queuing a message, there is very little reason for the remote sender process 305 to wait to add another message to the group. However, if the application is willing to tolerate a slight amount of latency (hereon called aclat), then even if the sender is not currently queuing a message, the remote sender process can wait for aclat nanoseconds for the next message from the sender process to be added to the group. To implement this idea, the sender process keeps a variable indication, called write_on, to signal whether it is currently queuing a message in the queue. It is declared as volatile. Also to be implemented is a user configurable aclat parameter which tells the remote sender process 305 how long to wait for the next message for grouping in case the sender process is not currently sending a message.

In the above discussed scenario, the modified sender process is detailed below:
a) write_on = 1
b) If the next update of rdma_free_node equals data_anchor, keep checking; else continue to the next step
c) Copy the message from the user buffer to the local memory mapped file
d) Update rdma_free_node to point to the next data buffer
e) Increment the rdma_inserts counter
f) write_on = 0

The remote sender process 305 is also modified for this scenario, and a few new variables are added to achieve the said optimization:
a) buffer_variable - introduced to control the grouping of messages and indicate when the grouping (bulking) can be stopped.
b) nc - a temporary variable used to control the wait for grouping messages in case the sender process is not currently sending a message.

The modified remote sender process 305, in view of the newly added variables, is as follows:
1) Register the local memory mapped file with the remote host (server B) and perform the following operations:
a. If rdma_free_node equals free_node and rdma_inserts equals number_of_inserts, keep checking; else proceed to the next step
b. node_var = free_node
c. prev_node = NULL
d. Initialize message_group to null
e. Initialize group_size to 0
f. buffer_variable = 1
g. While buffer_variable:
i. If node_var equals rdma_free_node:
1. If write_on == 0, exit the innermost loop
2. Wait while node_var equals rdma_free_node AND write_on == 1
3. If node_var still equals rdma_free_node:
a. nc = 0
b. init_time = timestamp
c. While nc == 0:
i. curr_time = timestamp
ii. diff_time = curr_time - init_time
iii. If node_var does not equal rdma_free_node, nc = 1
iv. Else if diff_time > aclat, nc = 2
d. If nc == 2, exit the innermost loop
ii. Add the message pointed to by node_var into the message_group
iii. Increment group_size
iv. prev_node = node_var
v. node_var = next node
vi. If group_size > up-limit, buffer_variable = 0
h. Add group_size to number_of_inserts in the node_counter structure of the nodes pointed to by last_sent_message_node_counter_pointer and prev_node
i. Update free_node (in the node_counter structure of the nodes pointed to by last_sent_message_node_counter_pointer and prev_node) to point to the message buffer next to the last message in message_group
j. last_sent_message_node_counter_pointer = prev_node
k. Check the status of previous RDMA work requests and clear any that have completed
l. Issue 1 RDMA work request for the payload section of the messages in message_group

In the above modified process, the remote sender process 305 first waits for at least one message to be inserted by the sender process, which is similar to what is done in the previous optimization approaches. The difference is in the looping once the first message is detected and the loop starts to group messages. The grouping loop, so modified, functions as follows: first it checks to see if new messages have arrived. If there is a new message then it proceeds to add the new message to the group as before. Otherwise it checks if the sender is currently queuing a new message; this is facilitated by the write_on variable, which is updated by the sender process. If the sender is indeed queuing a message then it waits for the new message to be inserted and goes on to add the new message to the group as before. If the sender is not adding a new message then it waits for the time specified by the administrator configured aclat parameter in a spin loop. Within this wait spin loop, if a new message is inserted then it exits the spin loop and proceeds to add the new message to the group as before. If the time period specified by aclat expires without a new message arriving, then the grouping of messages is stopped.
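The bulking decision just described can be sketched in C as follows. The sketch follows the prose explanation of the grouping loop (check for a new message, then the write_on flag, then spin for at most aclat nanoseconds); the helper timestamp_ns and all parameter names are assumptions made for illustration.

#include <stdint.h>
#include <time.h>

/* Illustrative helper: a monotonic timestamp in nanoseconds. */
static uint64_t timestamp_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

/* One bulking decision: returns 1 when a new message is available and
 * grouping should continue, 0 when grouping should stop. node_var is
 * the ring index the remote sender would consume next; rdma_free_node
 * and write_on are the shared queue fields updated by the sender. */
static int should_keep_bulking(uint64_t node_var,
                               const volatile uint64_t *rdma_free_node,
                               const volatile int *write_on,
                               uint64_t aclat_ns)
{
    /* A new message has already arrived. */
    if (node_var != *rdma_free_node)
        return 1;

    /* The sender is mid-insert: its message is imminent, wait for it. */
    while (*write_on) {
        if (node_var != *rdma_free_node)
            return 1;
    }
    if (node_var != *rdma_free_node)
        return 1;

    /* Sender idle: tolerate at most aclat nanoseconds of added latency
     * (the nc = 1 / nc = 2 outcomes of the pseudocode above). */
    uint64_t nc = 0, init_time = timestamp_ns();
    while (nc == 0) {
        if (node_var != *rdma_free_node)
            nc = 1;                                   /* message arrived */
        else if (timestamp_ns() - init_time > aclat_ns)
            nc = 2;                                   /* aclat expired */
    }
    return nc == 1;
}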
The next level of optimization is achieved when the memory layout is changed to further reduce the work requests. This is achieved by reducing the number of work requests to 1 for a batch of messages in the receiver process. In all the previous optimization approaches, the number of work requests was reduced in the remote sender process 305, whereas the receiver process is still issuing 1 work request per message received. So at this point the receiver is the bottleneck, as it is issuing the maximum number of work requests per second. To improve performance it is clear that the receiver should issue fewer work requests. A little consideration will show that it is sufficient if the receiver issues an acknowledgement work request only for the last message in the currently received set of messages from the remote sender. However, one perceived disadvantage of this is that the acknowledgement will arrive at the sender a little later. This can be offset by the following considerations:
- When the work request is actually executed by the NIC, the latest update to the receiver_node_counter structure is sent to the sender host (server A), as opposed to the update current at the time the work request was issued.
- If the throughput improves due to the reduction in work requests at the receiver process, the acknowledgment will again reach the sender host faster than perceived.

So, a further modification to the receiver process for better optimization is as follows:
1) Register the local memory mapped file with the remote host and perform the following operations:
a) Check the status of previous RDMA work requests and clear any that have completed
b) If last_received_message_node_counter_pointer.free_node equals data_anchor, keep checking; else continue
c) Copy the message from the local memory mapped file to the user buffer
d) last_received_message_node_counter_pointer = data_anchor
e) Update the data_anchor pointer to the next data buffer
f) Increment the number_of_deletes counter
g) If the currently read message is the last message in the group of messages sent by the remote sender process:
i) Issue one RDMA work request for updating the receiver_node_counter structure
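One iteration of this further modified receiver might look as follows in C, again using the illustrative layout and rdma_write() helper from the earlier sketches. The detection of the last message in a group shown here (comparing the advanced data_anchor with the free_node published in the node_counter of the message just read) is a simplified reading of step g; a robust implementation would also guard against stale node_counter values from earlier trips around the ring.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

static void receiver_iteration(struct mmap_queue *q, struct ibv_qp *qp,
                               struct ibv_mr *mr, uint64_t remote_base,
                               uint32_t rkey, char *user_buf)
{
    struct common_queue_data *c = &q->common;
    uint64_t last = c->last_received_message_node_counter_pointer;

    /* Step b: no unread message yet, keep checking. The remote NIC
     * updates these fields, so real code would read them as volatile. */
    while (q->nodes[last].nc.free_node == c->rnc.data_anchor)
        ; /* spin */

    /* Steps c-f: copy the message out (a real implementation would carry
     * the message length) and advance the receiver-side state. */
    uint64_t cur = c->rnc.data_anchor;
    memcpy(user_buf, q->nodes[cur].payload, MAX_MSG_SIZE);
    c->last_received_message_node_counter_pointer = cur;
    c->rnc.data_anchor = (cur + 1) % QUEUE_SIZE;
    c->rnc.number_of_deletes++;

    /* Step g: acknowledge only the last message of the group, so one
     * work request carries the whole receiver_node_counter structure. */
    if (c->rnc.data_anchor == q->nodes[cur].nc.free_node) {
        size_t off = offsetof(struct mmap_queue, common)
                   + offsetof(struct common_queue_data, rnc);
        rdma_write(qp, mr, (void *)&c->rnc, (uint32_t)sizeof(c->rnc),
                   remote_base + off, rkey);
    }
}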
Further, the following bulk messaging APIs are adapted for the RDMA write work request in the release_reserve_read_bulk and release_reserve_write_bulk functions:
- reserve_read_bulk(&no_of_messages) - the variable no_of_messages is updated to indicate the number of buffers available for reading.
- release_reserve_read_bulk(num) - mark the next "num" messages as read.
- reserve_write_bulk(&no_of_messages) - the variable no_of_messages is updated to indicate the number of free buffers available for writing.
- release_reserve_write_bulk(num) - mark the next "num" messages as ready to be read.

When executed on a separate infrastructure of the following specification, with certain changes, a throughput of 5.5 million messages per second is achieved.

Specification of the infrastructure:
- 2 nodes (server 1 and server 2), each having a six core Intel X5675 running at 3.07 GHz
- 12 MB shared cache
- 24 GB memory
- Network being Infiniband with 40 Gbps bandwidth and capable of RDMA
- Mellanox ConnectX®-2 40 Gb/s InfiniBand mezzanine card
- Mellanox M3601Q 36-Port 40 Gb/s InfiniBand Switch

The changes made to the above infrastructure being:
a) Maintaining the maximum queue size at 1000
b) Keeping the up-limit at 40%
c) Setting aclat to 10 nanoseconds

Referring to Figure 5, a latency test is set up for the infrastructure of the above specification to validate the optimization levels achieved with the modified process flow, as given below. So far the measurement results focused only on throughput tests where only the messaging rate was a concern. Thus a new test was devised which measures latency as well as throughput. In this test the sender and receiver processes run on the same host (server 1). There is a loopback process that runs on the remote host (server 2) which simply receives the messages from the sender process and sends them to the receiver process. The receiver process receives the messages and computes the latencies and throughput. For the latency computation the sender process records a timestamp A into the message just before sending. When this message reaches the receiver process it takes a timestamp B. The difference B-A is used for computing the latency, and the average is calculated over several samples. The queue parameters configured for this test being:
- Maintaining the maximum queue size at 100
- Keeping the up-limit at 40%
- Setting aclat to 10 nanoseconds

In this test the receiver process recorded a throughput of 3.25 million messages per second with an average round trip latency of 34 microseconds. Thus, using the modified approach, a rate of well over 1 million messages per second is achieved with a sub 100 microsecond latency.

The preceding description has been presented with reference to various embodiments of the invention. Persons skilled in the art and technology to which this invention pertains will appreciate that alterations and changes in the described structures and methods of operation can be practiced without meaningfully departing from the principle, spirit and scope of this invention.
AU2011265444A 2011-06-15 2011-12-21 Low latency FIFO messaging system Active AU2011265444B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2016201513A AU2016201513B2 (en) 2011-06-15 2016-03-09 Low latency fifo messaging system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN1745MU2011 2011-06-15
IN1745/MUM/2011 2011-06-15

Related Child Applications (1)

Application Number Title Priority Date Filing Date
AU2016201513A Division AU2016201513B2 (en) 2011-06-15 2016-03-09 Low latency fifo messaging system

Publications (2)

Publication Number Publication Date
AU2011265444A1 AU2011265444A1 (en) 2013-01-10
AU2011265444B2 true AU2011265444B2 (en) 2015-12-10

Family

ID=47334167

Family Applications (2)

Application Number Title Priority Date Filing Date
AU2011265444A Active AU2011265444B2 (en) 2011-06-15 2011-12-21 Low latency FIFO messaging system
AU2016201513A Active AU2016201513B2 (en) 2011-06-15 2016-03-09 Low latency fifo messaging system

Family Applications After (1)

Application Number Title Priority Date Filing Date
AU2016201513A Active AU2016201513B2 (en) 2011-06-15 2016-03-09 Low latency fifo messaging system

Country Status (2)

Country Link
CN (1) CN102831018B (en)
AU (2) AU2011265444B2 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104424105B (en) * 2013-08-26 2017-08-25 华为技术有限公司 The read-write processing method and device of a kind of internal storage data
IN2013MU03527A (en) * 2013-11-08 2015-07-31 Tata Consultancy Services Ltd
IN2013MU03528A (en) * 2013-11-08 2015-07-31 Tata Consultancy Services Ltd
CN106462525A (en) * 2014-06-10 2017-02-22 慧与发展有限责任合伙企业 Replicating data using remote direct memory access (RDMA)
US10146721B2 (en) * 2016-02-24 2018-12-04 Mellanox Technologies, Ltd. Remote host management over a network
CN105786624B (en) * 2016-04-01 2019-06-25 浪潮电子信息产业股份有限公司 A kind of dispatching platform based on redis Yu RDMA technology
CN107819734A (en) * 2016-09-14 2018-03-20 上海福赛特机器人有限公司 The means of communication and communication system between a kind of program based on web socket
US10587535B2 (en) 2017-02-22 2020-03-10 Mellanox Technologies, Ltd. Adding a network port to a network interface card via NC-SI embedded CPU
CN109002381B (en) * 2018-06-29 2022-01-18 Oppo(重庆)智能科技有限公司 Process communication monitoring method, electronic device and computer readable storage medium
WO2023040683A1 (en) * 2021-09-17 2023-03-23 华为技术有限公司 Data transmission method and input/output device

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8700724B2 (en) * 2002-08-19 2014-04-15 Broadcom Corporation System and method for transferring data over a remote direct memory access (RDMA) network
US7093147B2 (en) * 2003-04-25 2006-08-15 Hewlett-Packard Development Company, L.P. Dynamically selecting processor cores for overall power efficiency
US7475153B2 (en) * 2004-07-16 2009-01-06 International Business Machines Corporation Method for enabling communication between nodes
US20060075067A1 (en) * 2004-08-30 2006-04-06 International Business Machines Corporation Remote direct memory access with striping over an unreliable datagram transport
US7613813B2 (en) * 2004-09-10 2009-11-03 Cavium Networks, Inc. Method and apparatus for reducing host overhead in a socket server implementation
US7584327B2 (en) * 2005-12-30 2009-09-01 Intel Corporation Method and system for proximity caching in a multiple-core system
US7996583B2 (en) * 2006-08-31 2011-08-09 Cisco Technology, Inc. Multiple context single logic virtual host channel adapter supporting multiple transport protocols
US7949815B2 (en) * 2006-09-27 2011-05-24 Intel Corporation Virtual heterogeneous channel for message passing
US20090083392A1 (en) * 2007-09-25 2009-03-26 Sun Microsystems, Inc. Simple, efficient rdma mechanism
CN101577716B (en) * 2009-06-10 2012-05-23 中国科学院计算技术研究所 Distributed storage method and system based on InfiniBand network

Also Published As

Publication number Publication date
AU2016201513B2 (en) 2017-10-05
AU2016201513A1 (en) 2016-03-24
CN102831018B (en) 2015-06-24
AU2011265444A1 (en) 2013-01-10
CN102831018A (en) 2012-12-19

Similar Documents

Publication Publication Date Title
AU2016201513B2 (en) Low latency fifo messaging system
Su et al. Rfp: When rpc is faster than server-bypass with rdma
Chun et al. Virtual network transport protocols for Myrinet
KR100992282B1 (en) Apparatus and method for supporting connection establishment in an offload of network protocol processing
KR101006260B1 (en) Apparatus and method for supporting memory management in an offload of network protocol processing
US6070189A (en) Signaling communication events in a computer network
US9503383B2 (en) Flow control for reliable message passing
KR102011949B1 (en) System and method for providing and managing message queues for multinode applications in a middleware machine environment
EP2406723B1 (en) Scalable interface for connecting multiple computer systems which performs parallel mpi header matching
US9888048B1 (en) Supporting millions of parallel light weight data streams in a distributed system
US20190335010A1 (en) Systems and methods for providing messages to multiple subscribers
US6038604A (en) Method and apparatus for efficient communications using active messages
AU2014200239B2 (en) System and method for multiple sender support in low latency fifo messaging using rdma
EP2618257B1 (en) Scalable sockets
US10721302B2 (en) Network storage protocol and adaptive batching apparatuses, methods, and systems
US20140068165A1 (en) Splitting a real-time thread between the user and kernel space
CN111431757A (en) Virtual network flow acquisition method and device
Behrens et al. RDMC: A reliable RDMA multicast for large objects
US7788437B2 (en) Computer system with network interface retransmit
US6012121A (en) Apparatus for flexible control of interrupts in multiprocessor systems
US6256660B1 (en) Method and program product for allowing application programs to avoid unnecessary packet arrival interrupts
González-Férez et al. Tyche: An efficient Ethernet-based protocol for converged networked storage
CN116471242A (en) RDMA-based transmitting end, RDMA-based receiving end, data transmission system and data transmission method
JP3628514B2 (en) Data transmission / reception method between computers
EP2115619B1 (en) Communication socket state monitoring device and methods thereof

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)