GB2375408A - Data transmission via a network, using shared memory - Google Patents

Data transmission via a network, using shared memory

Info

Publication number
GB2375408A
Authority
GB
United Kingdom
Prior art keywords
application
data
network
receive
queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0111624A
Other versions
GB0111624D0 (en)
Inventor
Steven Leslie Pope
Derek Edward Roberts
David Riddoch
Kieran Mansley
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AT&T Laboratories Cambridge Ltd
Original Assignee
AT&T Laboratories Cambridge Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AT&T Laboratories Cambridge Ltd filed Critical AT&T Laboratories Cambridge Ltd
Priority to GB0111624A priority Critical patent/GB2375408A/en
Publication of GB0111624D0 publication Critical patent/GB0111624D0/en
Priority to AU2002242863A priority patent/AU2002242863A1/en
Priority to PCT/GB2002/001455 priority patent/WO2002093395A2/en
Publication of GB2375408A publication Critical patent/GB2375408A/en
Withdrawn legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/544 Buffers; Shared memory; Pipes

Abstract

A method is provided for transmitting data from a first endpoint application 31 in a computer 30 to a second endpoint application 34 in a remote computer 35 via a network 33. The second application 34 writes, in a receive queue 40 of shared memory 39 in the first application 31, descriptors indicating the availability of receive buffers 37 in the second application for receiving data. When the first application 31 has data 46 for transmission, the first application checks the receive queue 40 for descriptors indicating the availability of receive buffers 37. If receive buffers 37 are found to be available, the first application 31 transmits at least part of the data via the network 33.

Description

DATA TRANSMISSION VIA A NETWORK

The present invention relates to a method of asynchronous data transfer using network shared memory from a first endpoint application to a second endpoint application via a network. The present invention also relates to computer software and a communication system for performing such a method, an endpoint application for transmitting data according to such a method, a computer programmed to run such an application, a program for controlling a computer to run such an application, a network software library containing such a program, and a medium containing such a program or library.
Collapsed Local Area Network (CLAN), for example as disclosed in "Tripwire: A Synchronisation Primitive for Virtual Memory Mapped Communication", D. Riddoch, S. Pope, D. Roberts et al., 4th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP 2000), the contents of which are incorporated herein by reference, is a known type of user-level network in which each computer forming part of the network contains a network interface controller (NIC) which is capable of delivering data directly into the buffers of an application running on another computer of the network. In order to achieve this, the computers provide non-coherent shared memory so that each application can map in a portion of the address space of another application. Data can be transferred across the network by means of processor read or write instructions, and this is known as "programmed input output" (PIO). For example, if a block of data bytes is to be transferred, arguments (comprising a pointer to a local source memory containing the data, a pointer to a remote network mapped memory for receiving the data, and the length of the data) are passed to a memory copy function contained in a library to which the application has access. The data transfer is then performed as illustrated in Figure 1 of the accompanying drawings.
The block of data 1 is contained in a memory 2 within a computer 3 running the application which wishes to transmit the data. The memory copy function 4 is performed by the central processing unit (CPU) and cache of the computer 3, which retrieves the block 1 of data and supplies it to a network interface controller (NIC) 5.
The NIC 5 transmits the data via the CLAN 6 to an NIC 7 of a remote computer 8. The block of data is transmitted as a packet or packets containing the destination address.
The NIC 7 receives the or each packet and writes the data directly to the buffer 9 in a memory 10 of the computer 8. The NIC 7 may "stream" the data by starting to transmit packets before it has received the whole of the block 1.
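As an illustration of the PIO path just described, the sketch below copies a block into a network-mapped region. It is a minimal sketch under assumptions: the function name and the existence of a pre-mapped pointer are illustrative, not part of any actual CLAN library.

```c
/* Minimal PIO sketch, assuming 'remote' points into a network-mapped
 * window of the receiving application's memory (buffer 9 in Figure 1).
 * Ordinary CPU stores to this window are carried across the network by
 * the NIC; no transfer request is constructed. */
#include <stddef.h>
#include <string.h>

void pio_send(volatile void *remote, const void *local_src, size_t len)
{
    /* The CPU "pushes" the data with plain write instructions. */
    memcpy((void *)remote, local_src, len);
}
```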
Another form of data transfer is known as "direct memory access (DMA) transfer" and may also be performed by such a CLAN. In such a DMA transfer, data are transferred by means of the transmitting application constructing a request for the NIC to transfer the data.
For example, in the network shown in Figure 1, the transmitting NIC 5 has a set of registers forming a DMA queue for handling such data transfers. The application running on the computer 3 constructs a transfer request in local memory, again comprising a pointer to the local source memory containing the block of data for transfer, a pointer to the remote network mapped memory for receiving the data block, and the length of the data block. The application writes the address pointer into the DMA queue of the NIC 5. The NIC 5 processes the requests in the DMA queue one at a time and transmits data via the network 6 as each request is processed. As alternatives to a DMA queue in the NIC, the NIC 5 may have a register which points to a queue in memory together with another register forming a read pointer into that queue, or a first-in/first-out (FIFO) buffer for requests.
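A hedged sketch of this request-based DMA path follows. The descriptor layout and register names are assumptions chosen to match the description above (source pointer, destination pointer, length, and a ring serviced by the NIC), not a real NIC programming interface.

```c
#include <stdint.h>

/* Transfer request as described above: local source, remote
 * network-mapped destination, and length. */
struct dma_request {
    const void *src;
    uint64_t    dst;
    uint32_t    len;
};

/* Assumed NIC interface: a request ring in memory plus write- and
 * read-pointer registers exposed by the NIC. */
struct nic_dma_queue {
    volatile struct dma_request *ring;
    volatile uint32_t *write_ptr;   /* advanced by the application */
    volatile uint32_t *read_ptr;    /* advanced by the NIC as it sends */
    uint32_t size;                  /* number of slots in the ring */
};

int dma_post(struct nic_dma_queue *q, const void *src, uint64_t dst, uint32_t len)
{
    uint32_t w = *q->write_ptr;
    if (w - *q->read_ptr == q->size)
        return -1;                       /* queue full */
    q->ring[w % q->size] = (struct dma_request){ src, dst, len };
    *q->write_ptr = w + 1;               /* hand the request to the NIC */
    return 0;
}

/* Because a DMA transfer is a "pull" by the NIC, completion is only
 * visible once the NIC advances its read pointer past the request. */
int dma_completed(const struct nic_dma_queue *q, uint32_t request_index)
{
    return *q->read_ptr > request_index;
}
```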
Although these data transfers have been described with reference to a memory-mapped network, this is not necessary. For example, a destination virtual circuit identifier could be specified for the destination address. Alternatively, in the case where the destination information is contained in the data being transferred, such as an internet protocol (IP) packet, it may not be necessary to specify any destination information.
Whereas a PIO transfer is a "push" of data from the CPU, a DMA transfer is a "pull" from the NIC. Because of this, completion of a DMA transfer is not known by the CPU until some indication is received from the NIC. In the case of a DMA queue, completion may be indicated by the NIC updating a pointer value.
It is typical for a computer such as 3 to run several applications and a kernel which is common to all of the applications, which provides shared resources for the applications, and which schedules which application is to be run at any time. Because each application can be scheduled and descheduled at any time, it is not possible for the applications to share a common DMA queue, for two reasons. Firstly, a DMA transfer is not "atomic", where "atomic" means that it will either happen correctly or it will not happen at all. In particular, such a transfer requires two operations, and an application requesting such a transfer could be interrupted after only one operation had been performed. Also, two applications could change the same item affecting such a transfer so as to invalidate the transfer. A kernel can be arranged to disable interrupts so as to guarantee atomicity of such a transfer operation. Secondly, it would be possible for an application to fill a common DMA queue so that no other application would then have access to this resource for an unacceptable period of time. One of the functions of a kernel is to ensure that resources are managed fairly between the applications. Thus, traditional network stacks must reside in the kernel.
A known user-level network standard is referred to as Virtual Interface Architecture (VIA), for example as disclosed in: IEEE Computer, Vol. 31, No. 11, November 1998, pp 61-66, "Evolution of the Virtual Interface Architecture", Thorsten von Eicken and Werner Vogels; Intel Corporation, "Virtual Interface (VI) Architecture: Defining the Path to Low Cost High Performance Scalable Clusters" (1997); and Intel Corporation, "Intel Virtual Interface (VI) Architecture Developer's Guide", revision 1.0 (1998), the contents of which are incorporated herein by reference. The following is a simplified description of a basic mode of operation under VIA.
The NIC implements a DMA queue for each application in order to avoid the need to have the DMA queues in the kernel. Thus, as shown in Figure 2 of the accompanying drawings, N applications such as 20 are run in accordance with a schedule on a computer 21 containing an NIC 22. In order to transmit data across a network 23, an application containing the block of data 24 to be transferred forms a request 25, for example in the form of a descriptor specifying the source and destination locations and the data length, adds this to its own virtual DMA queue 26 and increments a pointer B.
If the NIC is not currently servicing the queue, the application 20 pokes a "doorbell" 27 corresponding to the application 20 in the NIC 22. The NIC 22 contains a DMA queue 28 corresponding to the application 20. Each doorbell 27 and DMA queue 28 may be unique to the corresponding application 20 as shown or may be implemented as a single queue.
The NIC 22 processes the data transfer requests in the various queues 28 corresponding to the different applications 20. Each time a data transmission request is fulfilled and the data are transmitted to the network 23, the NIC increments a pointer such as A to inform the application 20 of the successful transmission of the data. As an alternative to such a pointer update, the NIC 22 may form a "completion queue" for each application in order to specify more information about the data transmission. It is also possible for the individual completion queues to be consolidated as a single completion queue shared by all of the applications.
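A sketch of this Figure 2 transmit path is shown below. The pointer names A and B follow the description above, but the struct layout, ring size and doorbell mechanism are illustrative assumptions rather than the VIA specification.

```c
#include <stdint.h>

struct via_desc { uint64_t src, dst; uint32_t len; };

/* Per-application virtual DMA queue 26 and its doorbell 27 in the NIC. */
struct via_tx_queue {
    struct via_desc ring[64];
    volatile uint32_t a;          /* pointer A: advanced by the NIC per send */
    uint32_t b;                   /* pointer B: advanced by the application */
    volatile uint32_t *doorbell;  /* this application's doorbell register */
};

int via_post_send(struct via_tx_queue *q, uint64_t src, uint64_t dst, uint32_t len)
{
    if (q->b - q->a == 64)
        return -1;                       /* virtual queue full */
    int nic_idle = (q->a == q->b);       /* no outstanding requests */
    struct via_desc *d = &q->ring[q->b % 64];
    d->src = src;
    d->dst = dst;
    d->len = len;
    q->b++;                              /* publish request 25 */
    if (nic_idle)
        *q->doorbell = 1;                /* poke the NIC to service queue 28 */
    return 0;
}
```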
The data reception process at the receiving application is essentially the reverse of the transmitting side. The application adds descriptors for free buffers, in which the data may be received, to a buffer queue. Each application has its own buffer queue in the NIC of the receiving computer. A buffer queue may become empty such that there is no buffer available to receive the data. If the NIC "believes" that the connection is not reliable, it will maintain the connection but may delete packets intended for that application. Alternatively, if the NIC believes that the connection is reliable but data are not being transferred, it may close the connection.
In the case of the VIA standard, it is the responsibility of a higher level protocol of the application to ensure that the buffer queue contains sufficient buffers for any incoming data. In order to avoid the problems described hereinbefore, the buffer queue is located in the user memory.
A disadvantage of such an arrangement is that there is no application-to-application level of flow control of data transfer. For example, an NIC will transmit data whenever there is an entry in any of the DMA queues, irrespective of whether the receiving application has placed buffer descriptors in its receive queue in the NIC at the receiving computer. If the appropriate descriptors are not present in the application receive queue, data may be dropped by the receiving NIC or the VIA connection may be broken as described above. A receiving computer will typically run several applications and, because such applications are scheduled and do not run continuously, the individual receive queues in the NIC may not be examined or serviced for substantial periods of time. This differs from the other known techniques described hereinbefore, in which a kernel posts descriptors to a single receive queue and manages all of the applications run by the computer: in a kernel stack, buffers can be posted to any application and, unless overloaded, the kernel will service the queue relatively quickly.
Thus, unless additional high-level software is provided, VIA is unable to stream data between applications. Although proposals, such as Infiniband (presently disclosed at http://www.infinibandta.org), exist for dealing with this problem, such proposals provide a hardware flow control mechanism for preventing a transmitting NIC from sending data if no buffers are available in the receive queue. This substantially increases the complexity of the hardware.
With VIA at present, if the above-described problems are to be avoided, a receiving NIC must communicate details of the receive buffer descriptors in its receive queue to the transmitting NIC. VIA provides for this using software, for example by posting VIA requests. Given that data are transmitted in blocks of predetermined unit size for each connection, even with software or hardware flow control, a receiving application must reserve a number of buffers in order to obtain good data transfer performance. For example, good performance may generally be obtained by reserving eight buffers.
However, if the unit transfer size is 32 kbytes, then 256 kbytes must be allocated for each application at the receiving computer. In order to support a substantial number of applications, a relatively large amount of memory must be provided and this therefore limits the scalability of the system for practical and acceptable implementations. Also, there is considerable overhead in managing VIA descriptor queues and this also limits the scalability of the hardware.
A further limitation of the known system is that all data transfer is performed asynchronously by means of DMA "pull" operations from the NIC. This is inefficient for relatively small messages.
According to a first aspect of the invention, there is provided a method of asynchronous data transfer using network shared memory from a first endpoint application to a second endpoint application via a network by which network the first and second applications are separated, in which method: the second application writes across the network, in the network shared memory of the first application, a first descriptor indicating the final endpoint destination address of a receive buffer in the second application available for receiving data; and when the first application has data for transmission to the second application, the first application checks the shared memory for the first descriptor and, if present, transmits at least part of the data via the network to the final endpoint destination address in the second application.
The network may be a memory-mapped network.
Each of the first and second applications may be a user-level application. The shared memory may be user-level memory.
The first descriptor may comprise the size of the receive buffer. The first and second applications may run on different computers.
The shared memory may contain a receive queue for a plurality of first descriptors received from the second application, each of the first descriptors indicating the final endpoint destination address of a respective receive buffer in the second application available for receiving data.
The first application may store a read pointer for the receive queue. The shared memory at the second application may contain a copy of the receive queue read pointer.
The shared memory at the first application may store a write pointer for the receive queue. The second application may contain a copy of the receive queue write pointer.
The first application may compare the receive queue read and write pointers to check for the presence of the first descriptors.
If the first application determines that there are no or insufficient first descriptors in the receive queue, the first application may generate and store a first code representing the arrival of a predetermined first descriptor indicating the presence of sufficient receive buffer space in the second application for a data transfer from the first application, a first comparator may compare the first code with addresses in data supplied to the shared memory of the first application to detect the arrival of the predetermined first descriptor, and, in response to detection of the arrival of the predetermined first descriptor, the application may institute the data transfer. The first code may represent the arrival of a predetermined receive queue write pointer update from the second application.
The first application may contain a transmit queue for a plurality of second descriptors, each of which indicates the source endpoint address of a respective transmit buffer in the first application containing data for transmission to the second application. Each of the second descriptors may comprise the size of the data contained in the corresponding transmit buffer. The shared memory at the first application may store a read pointer for the transmit queue.
The first application may store a write pointer for the transmit queue.
A network interface controller for the first application may compare the transmit queue read and write pointers to check for the presence of data for transfer to the second application.
If the transmit queue becomes full, the first application may generate and store a second code representing the transfer of data from the first application; a second comparator may compare the second code with addresses in data supplied to the first application to detect a predetermined data transfer from the first application and, in response to detection of the data transfer, may permit entry of at least one further second descriptor in the transmit queue. The second code may represent a predetermined transmit queue read pointer update.
The first application may transmit data as packets, each of which contains sufficient final destination address information to identify the destination receive buffer in the second application.
The first application may compare the size of the data for transmission with the size of the receive buffer and, if the receive buffer is too small for the data, may send a part of the data which is less than or equal to the size of the receive buffer.
The first application may select between programmed input output and direct memory access for each data transmission.
According to a second aspect of the invention, there is provided computer software for performing a method according to the first aspect of the invention.
According to a third aspect of the invention, there is provided a communication system arranged to perform a method according to the first aspect of the invention.
According to a fourth aspect of the invention, there is provided a first endpoint application having network shared memory and arranged: to receive in the shared memory a descriptor from a second endpoint application indicating the final endpoint destination address of a receive buffer in the second application available for receiving data; and when the first application has data for transmission to the second application, to check the shared memory for the descriptor and, if present, to transmit at least part of the data to the final endpoint destination address via a network.
According to a fifth aspect of the invention, there is provided a computer programmed to run an application according to the fourth aspect of the invention.
According to a sixth aspect of the invention, there is provided a program for controlling a computer to run an application according to the fourth aspect of the invention.
According to a seventh aspect of the invention, there is provided a network software library containing a program according to the sixth aspect of the invention.
According to an eighth aspect of the invention, there is provided a medium containing a program according to the sixth aspect of the invention.
The invention will be further described, by way of example, with reference to the accompanying drawings, in which:

Figure 1 is a block schematic diagram illustrating a known type of PIO transfer;

Figure 2 is a block schematic diagram illustrating a known arrangement for transmitting data from several applications running on a computer to a network; and

Figure 3 is a block schematic diagram illustrating an application and a communication system constituting embodiments of the invention.
Figure 3 illustrates a computer 30 running a plurality of scheduled applications 31, only one of which is shown. The applications communicate with an NIC 32 which sends data to and receives data from a CLAN 33 in the form of data packets, each of which contains sufficient address information to identify the final destination address to which the data are to be delivered. In the example illustrated, the application 31 wishes to transmit data via the CLAN 33 to a receiving application 34 running on a remote computer 35 provided with an NIC 36. Both applications 31 and 34 are endpoint applications, and the data transmitted from the application 31 via the CLAN 33 to the application 34 identify the receive buffers 37 representing the final destination of the data packets. The receive buffers 37 are located in the application 34, in local network shared memory 38.
The application 31 has local memory which, among other things, contains a receive queue 40, a read pointer 41 and a write pointer 42 for the receive queue 40, a transmit queue 43, and a read pointer 44 and a write pointer 45 for the transmit queue 43. The receive queue 40 and the write pointer 42 are in local network shared memory 39 in the application 31, whereas the read pointer 41 need not be in the shared memory and is illustrated as being outside the shared memory in Figure 3. The transmit queue 43, the read pointer 44 and the write pointer 45 are in another shared memory 47 which is shared with the NIC 32. The application 34 stores copies 41' and 42' of the read and write pointers 41 and 42, respectively. The pointer 41' is required to be in the shared memory 38 whereas the pointer 42' does not have to be and is illustrated as being outside the shared memory in Figure 3. The memories 38 and 39 are shared between the applications 31 and 34.
The application 31 allocates data buffers such as 46 (in the shared memory 47) for blocks of data which are to be sent to the receiving application 34. When the application 31 wishes to transfer data to the application 34, the data are placed in the buffer 46 and a descriptor is placed in the transmit queue 43 in the next available location as defined by the write pointer 45. The descriptor comprises, for example, a pointer to the local source memory (the buffer 46), a pointer to the application 34, and an indication of the size or length of the block of data to be transmitted. The write pointer 45 is then incremented to point to the next available location in the transmit queue 43 and a doorbell 49 in the NIC 32 is poked if the NIC 32 is not currently servicing the transmit queue 43.
As each receive buffer 37 (or group thereof) becomes available for receipt of data in the shared memory 38 of the application 34, the application 34 checks the local copies 41' and 42' of the receive queue pointers 41 and 42 to ascertain whether space is available in the receive queue 40 and, if so, sends a descriptor of the buffer or group (via the NIC 36, the CLAN 33 and the NIC 32) to the receive queue 40 at the current location indicated by the write pointer 42. The write pointer 42 and the copy 42' are then incremented to point to the next available location in the receive queue 40.
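This posting step might look like the sketch below, in which application 34 writes a buffer descriptor across the CLAN into queue 40 and then advances write pointer 42. The struct and field names are assumptions for illustration.

```c
#include <stdint.h>

/* First descriptor: final endpoint destination address and size of a
 * receive buffer 37. */
struct rx_desc {
    uint64_t buf_addr;
    uint32_t buf_len;
};

/* Receiver-side (application 34) view of the sender's receive queue 40,
 * reached through the network shared memory mapping. */
struct rx_poster {
    volatile struct rx_desc *queue40;   /* queue 40 in shared memory 39 */
    volatile uint32_t *write_ptr42;     /* mapped write pointer 42 */
    volatile uint32_t read_copy41;      /* local copy 41', updated remotely */
    uint32_t write_copy42;              /* local copy 42' */
    uint32_t size;
};

int post_receive_buffer(struct rx_poster *p, uint64_t addr, uint32_t len)
{
    if (p->write_copy42 - p->read_copy41 == p->size)
        return -1;                           /* no space in queue 40 */
    uint32_t slot = p->write_copy42 % p->size;
    p->queue40[slot].buf_addr = addr;        /* descriptor crosses the CLAN */
    p->queue40[slot].buf_len = len;
    p->write_copy42++;                       /* advance copy 42' ... */
    *p->write_ptr42 = p->write_copy42;       /* ... and pointer 42 itself */
    return 0;
}
```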
The application 31 makes use of a software VIA library 51, which is available to all of the applications running on the computer 30, to determine whether the transmit queue 43 contains a descriptor of data to be sent and whether the receive queue 40 contains a descriptor of a receive buffer in the application 34 for receiving the data. If so, the NIC 32 is requested to perform a direct memory access transfer of data from the appropriate data buffer 46 via the CLAN 33 and the NIC 36 to the receive buffer of the application 34, and the NIC 32 updates the read pointer 44 to indicate that the data block has been transmitted.
The read pointer 41 is updated after transmission by DMA to indicate that the receive buffer is no longer available, and the read pointer 41' is updated to indicate to the application 34 that the receive buffer 37 contains valid data.
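On the sending side, the check and retirement just described might be sketched as follows. The names are again illustrative, and the DMA request itself is assumed to be posted through something like the dma_post sketch earlier.

```c
#include <stddef.h>
#include <stdint.h>

struct rx_desc { uint64_t buf_addr; uint32_t buf_len; };

/* Sender-side (application 31) view of receive queue 40 and its pointers. */
struct rx_checker {
    volatile struct rx_desc *queue40;   /* in shared memory 39 */
    uint32_t read_ptr41;                /* pointer 41 (need not be shared) */
    volatile uint32_t write_ptr42;      /* pointer 42, written by app 34 */
    volatile uint32_t *read_copy41;     /* copy 41' in shared memory 38 */
    uint32_t size;
};

/* Returns the next posted receive buffer descriptor, or NULL if
 * application 34 has posted none (pointers 41 and 42 are equal). */
const volatile struct rx_desc *next_receive_buffer(struct rx_checker *c)
{
    if (c->read_ptr41 == c->write_ptr42)
        return NULL;
    return &c->queue40[c->read_ptr41 % c->size];
}

/* Called after the DMA completes: the buffer is no longer available to
 * the sender, and updating copy 41' tells application 34 that buffer 37
 * now contains valid data. */
void retire_receive_buffer(struct rx_checker *c)
{
    c->read_ptr41++;
    *c->read_copy41 = c->read_ptr41;
}
```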
It may happen that the transmit queue 43 becomes full and is unable to accept any further requests for transmission of data by the application 31. Thus, any further data to be transmitted to the application 34 cannot be put in the transmit queue and the application 31 may become blocked. It may therefore be preferable for the application to be descheduled or to deal with some other process until the transmit queue 43 is able to receive further entries. In such a case, the application 31 can set a tripwire 48 of the type disclosed in PCT Patent Application No. GB 00/01691, Serial No. WO 00/67131, the contents of which are incorporated herein by reference.
The tripwire 48 monitors the data flow from the NIC 32 to the application 31. The application 31 sets the tripwire 48 so as to respond to updating of the read pointer 44, indicating that the NIC has dealt with the transfer of at least some of the data corresponding to the transmit queue 43. For example, the application 31 may set the tripwire 48 to respond when a certain number of entries in the transmit queue 43 have been dealt with by the NIC 32. When the tripwire detects this event, it triggers a resulting action as determined by the tripwire setting made by the application 31. The resulting action can take various forms depending on the requirements, and these are disclosed in the PCT patent application mentioned above. For example, if the application 31 has become descheduled, it may be rescheduled or an interrupt may be generated causing the application to be woken up. Alternatively, flags may be set or an event raised to indicate that the transmit queue 43 has the capacity to handle further data transmission. The flags are periodically polled by the application 31 and, when set, further entries may be made in the transmit queue 43.
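The flag-polling alternative mentioned above can be sketched without the tripwire primitive itself (whose interface is defined in the referenced application). The loop below simply waits for the NIC to advance read pointer 44, yielding the processor between polls; the function name and the yield strategy are assumptions.

```c
#include <stdint.h>
#include <sched.h>

/* Wait until the NIC has dealt with at least 'wanted' further entries of
 * the transmit queue 43, i.e. advanced read pointer 44 by that amount.
 * A tripwire would instead raise an interrupt or event on the matching
 * pointer update; this polling loop is the flag-style fallback. */
void wait_for_tx_entries(volatile const uint32_t *read_ptr44, uint32_t wanted)
{
    uint32_t target = *read_ptr44 + wanted;
    /* Wrap-safe comparison for a monotonically increasing counter. */
    while ((int32_t)(*read_ptr44 - target) < 0)
        sched_yield();   /* give up the CPU rather than spin hot */
}
```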
It may happen that there are no available receive buffers 37 in the receiving application 34. This can be detected in the application 31 by comparing the read and write pointers 41 and 42. Again, this may block further processing by the application 31, which may again become descheduled or may deal with other tasks, such as servicing other endpoints represented by other blocks of shared memory. In such a situation, the application 31 can set another tripwire 50 to respond to receive buffers 37 becoming available. The tripwire 50 again monitors the flow of data from the NIC to the application 31 and may be set to respond to updating of the write pointer 42 to indicate the presence of available receive buffers 37. When the tripwire 50 detects such an event, it triggers a resulting action which may, for example, be of any of the types described above with reference to the tripwire 48 and may result in transmission of data to the application 34 being resumed.
It is thus possible to provide flow control within the VIA standard and other user-level networks without requiring any additional hardware assistance. The data transmission performance across the network may therefore be substantially improved. In particular, the sending application does not request a transfer of data until the receiving application has indicated that receive buffers are available to receive the data. The receiving application 34 or a tripwire informs the sending application 31 when buffer space becomes available, and the sending application 31 sends data only when sufficient receive buffers are available to receive it. Thus, data are not dropped by the NIC 36 and the VIA connection remains open.
If the buffer space available in the application 34 is less than the size of a block of data to be transmitted, the application 31 may wait until sufficient receive buffer space is available. However, the software VIA library 51 may be arranged to divide the data block into partial blocks to match the available receive buffer space in the application 34. Good performance can therefore be achieved by allocating in the receiving application 34 smaller buffer space than would be required to receive the whole data block in one DMA operation from the application 31. There is therefore no longer any requirement for the receiving application 34 to make available substantial numbers of large buffers to cope with an unpredictable flow of data. Data can flow through the network at a relatively high rate, limited only by the ability of the receiving application 34 to provide receive buffers for receipt of the data. The scalability is not, therefore, compromised by the need to provide substantial amounts of hardware memory per application in the computer 35.
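A sketch of this block-splitting behaviour follows, reusing simplified forms of the illustrative helpers from the earlier sketches. The helper signatures are assumptions, and a real library would block on a tripwire rather than spin when no buffer is posted.

```c
#include <stddef.h>
#include <stdint.h>

struct rx_desc { uint64_t buf_addr; uint32_t buf_len; };

/* Assumed helpers, as sketched earlier: peek at the next posted receive
 * buffer (NULL if none), issue a DMA, and retire the used descriptor. */
const volatile struct rx_desc *next_receive_buffer(void);
void dma_transfer(const void *src, uint64_t dst, uint32_t len);
void retire_receive_buffer(void);

/* Send a block of arbitrary size as partial blocks, each no larger than
 * the receive buffer that application 34 has actually posted. */
void send_block(const uint8_t *data, size_t len)
{
    while (len > 0) {
        const volatile struct rx_desc *rx = next_receive_buffer();
        if (rx == NULL)
            continue;        /* wait for a buffer; see tripwire 50 above */
        uint32_t chunk = (len < rx->buf_len) ? (uint32_t)len : rx->buf_len;
        dma_transfer(data, rx->buf_addr, chunk);
        retire_receive_buffer();
        data += chunk;
        len -= chunk;
    }
}
```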
Although the transmission of data by DMA has been described, PIO transfer may also be made available. In this case, the sending application 31 or the library 51 may choose to transmit data using PIO or by DMA request from the NIC 32. By choosing which form of data transfer is used, substantial performance benefits may be achieved. For example, whereas DMA transfer is efficient for larger blocks of data, PIO transfer is more efficient for relatively small transfers. The decision as to which form of transfer is to be used may therefore be taken by the application 31 or by the library 51 in accordance with the size of the data to be transferred. For consistency, pointer updates for DMA and PIO transfers must themselves be performed by DMA or PIO transfer, respectively.
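This size-based choice can be expressed as a simple threshold test. The crossover value below is an arbitrary illustrative figure, since the real break-even point depends on the platform and the NIC.

```c
#include <stddef.h>

#define PIO_DMA_CROSSOVER 512   /* bytes; illustrative, platform-dependent */

enum xfer_kind { XFER_PIO, XFER_DMA };

/* Small transfers avoid the cost of building a request and waiting for a
 * completion; large transfers free the CPU by letting the NIC pull the
 * data. Pointer updates then travel by the same method as the data. */
enum xfer_kind choose_transfer(size_t len)
{
    return (len <= PIO_DMA_CROSSOVER) ? XFER_PIO : XFER_DMA;
}
```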
Because the transmit and receive queues are provided in software, further information may be added to the queues without requiring any modification to the hardware. Examples of such further information are the length of data in a buffer, immediate data which do not require the use of receive buffers, and error messages.

Claims (31)

CLAIMS:
1. A method of asynchronous data transfer using network shared memory from a first endpoint application to a second endpoint application via a network by which network the first and second applications are separated, in which method: the second application writes across the network, in the network shared memory of the first application, a first descriptor indicating the final endpoint destination address of a receive buffer in the second application available for receiving data; and when the first application has data for transmission to the second application, the first application checks the shared memory for the first descriptor and, if present, transmits at least part of the data via the network to the final endpoint destination address in the second application.
2. A method as claimed in claim 1, in which the network is a memory-mapped network.
3. A method as claimed in claim 1 or 2, in which each of the first and second applications is a user-level application.
4. A method as claimed in claim 3, in which the shared memory is user-level memory.
5. A method as claimed in any one of the preceding claims, in which the first descriptor comprises the size of the receive buffer.
6. A method as claimed in any one of the preceding claims, in which the first and second applications run on different computers.
7. A method as claimed in any one of the preceding claims, in which the shared memory contains a receive queue for a plurality of first descriptors received from the second application, each of the first descriptors indicating the final endpoint destination address of a respective receive buffer in the second application available for receiving data.
8. A method as claimed in claim 7, in which the first application stores a read pointer for the receive queue.
9. A method as claimed in claim 8, in which the shared memory at the second application contains a copy of the receive queue read pointer.
10. A method as claimed in any one of claims 7 to 9, in which the shared memory at the first application stores a write pointer for the receive queue.
11. A method as claimed in claim 10, in which the second application contains a copy of the receive queue write pointer.
12. A method as claimed in claim 10 or 11 when dependent on claim 8 or 9, in which the first application compares the receive queue read and write pointers to check for the presence of the first descriptors.
13. A method as claimed in any one of claims 7 to 12, in which, if the first application determines that there are no or insufficient first descriptors in the receive queue, the first application generates and stores a first code representing the arrival of a predetermined first descriptor indicating the presence of sufficient receive buffer space in the second application for a data transfer from the first application, a first comparator compares the first code with addresses in data supplied to the shared memory of the first application to detect the arrival of the predetermined first descriptor, and, in response to detection of the arrival of the predetermined first descriptor, the application institutes the data transfer.
14. A method as claimed in claim 13 when dependent on claim 10 or 11, in which the first code represents the arrival of a predetermined receive queue write pointer update from the second application.
15. A method as claimed in any one of the preceding claims, in which the first application contains a transmit queue for a plurality of second descriptors, each of which indicates the source endpoint address of a respective transmit buffer in the first application containing data for transmission to the second application.
16. A method as claimed in claim 15, in which each of the second descriptors comprises the size of the data contained in the corresponding transmit buffer.
17. A method as claimed in claim 15 or 16, in which the shared memory at the first application stores a read pointer for the transmit queue.
18. A method as claimed in any one of claims 15 to 17, in which the first application stores a write pointer for the transmit queue.
19. A method as claimed in claim 18 when dependent on claim 17, in which a network interface controller for the first application compares the transmit queue read and write pointers to check for the presence of data for transfer to the second application.
20. A method as claimed in any one of claims 15 to 19, in which, if the transmit queue becomes full, the first application generates and stores a second code representing the transfer of data from the first application, a second comparator compares the second code with addresses in data supplied to the first application to detect a predetermined data transfer from the first application, and, in response to detection of the data transfer, permits entry of at least one further second descriptor in the transmit queue.
21. A method as claimed in claim 20 when dependent on claim 17, in which the second code represents a predetermined transmit queue read pointer update.
22. A method as claimed in any one of the preceding claims, in which the first application transmits data as packets, each of which contains sufficient final destination address information to identify the destination receive buffer in the second application.
23. A method as claimed in any one of the preceding claims, in which the first application compares the size of the data for transmission with the size of the receive buffer and, if the receive buffer is too small for the data, sends a part of the data which is less than or equal to the size of the receive buffer.
24. A method as claimed in any one of the preceding claims, in which the first application selects between programmed input output and direct memory access for each data transmission.
25. Computer software for performing a method as claimed in any one of the preceding claims.
26. A communication system arranged to perform a method as claimed in any one of claims 1 to 24.
27. A first endpoint application having a network shared memory and arranged: to receive in the shared memory a descriptor from a second endpoint application indicating the final endpoint destination address of a receive buffer in the second application available for receiving data; and when the first application has data for transmission to the second application, to check the shared memory for the descriptor and, if present, to transmit at least part of the data to the final endpoint destination address via a network.
28. A computer programmed to run an application as claimed in claim 27.
29. A program for controlling a computer to run an application as claimed in claim 27.
30. A network software library containing a program as claimed in claim 29.
31. A medium containing a program as claimed in claim 29 or a library as claimed in claim 30.
GB0111624A 2001-05-12 2001-05-12 Data transmission via a network, using shared memory Withdrawn GB2375408A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB0111624A GB2375408A (en) 2001-05-12 2001-05-12 Data transmission via a network, using shared memory
AU2002242863A AU2002242863A1 (en) 2001-05-12 2002-03-25 Data transmission via a network
PCT/GB2002/001455 WO2002093395A2 (en) 2001-05-12 2002-03-25 Data transmission via a network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0111624A GB2375408A (en) 2001-05-12 2001-05-12 Data transmission via a network, using shared memory

Publications (2)

Publication Number Publication Date
GB0111624D0 GB0111624D0 (en) 2001-07-04
GB2375408A true GB2375408A (en) 2002-11-13

Family

ID=9914508

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0111624A Withdrawn GB2375408A (en) 2001-05-12 2001-05-12 Data transmission via a network, using shared memory

Country Status (3)

Country Link
AU (1) AU2002242863A1 (en)
GB (1) GB2375408A (en)
WO (1) WO2002093395A2 (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0164972A2 (en) * 1984-06-08 1985-12-18 AT&T Corp. Shared memory multiprocessor system
US5448708A (en) * 1992-10-30 1995-09-05 Ward; James P. System for asynchronously delivering enqueue and dequeue information in a pipe interface having distributed, shared memory
US5708795A (en) * 1993-03-18 1998-01-13 Fujitsu Limited Asynchronous access system for multiprocessor system and processor module used in the asynchronous access system
US5652885A (en) * 1993-05-25 1997-07-29 Storage Technology Corporation Interprocess communications system and method utilizing shared memory for message transfer and datagram sockets for message control
EP1102171A2 (en) * 1999-11-22 2001-05-23 Texas Instruments Incorporated Universal serial bus network peripheral device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013052695A1 (en) * 2011-10-04 2013-04-11 Qualcomm Incorporated Inter-processor communication apparatus and method
US8745291B2 (en) 2011-10-04 2014-06-03 Qualcomm Incorporated Inter-processor communication apparatus and method

Also Published As

Publication number Publication date
WO2002093395A3 (en) 2003-12-31
GB0111624D0 (en) 2001-07-04
WO2002093395A2 (en) 2002-11-21
AU2002242863A1 (en) 2002-11-25

Similar Documents

Publication Publication Date Title
KR100817676B1 (en) Method and apparatus for dynamic class-based packet scheduling
EP1856623B1 (en) Including descriptor queue empty events in completion events
US7610413B2 (en) Queue depth management for communication between host and peripheral device
EP1856610B1 (en) Transmit completion event batching
US5752078A (en) System for minimizing latency data reception and handling data packet error if detected while transferring data packet from adapter memory to host memory
US7219121B2 (en) Symmetrical multiprocessing in multiprocessor systems
US6904040B2 (en) Packet preprocessing interface for multiprocessor network handler
US7124211B2 (en) System and method for explicit communication of messages between processes running on different nodes in a clustered multiprocessor system
US7111092B1 (en) Buffer management technique for a hypertransport data path protocol
US7295565B2 (en) System and method for sharing a resource among multiple queues
EP0889622B1 (en) Apparatus and method for remote buffer allocation and management for message passing between network nodes
EP0249116B1 (en) Method for controlling data transfer buffer
US7457845B2 (en) Method and system for TCP/IP using generic buffers for non-posting TCP applications
CZ20032079A3 (en) Method and apparatus for transferring interrupts from a peripheral device to a host computer system
EP1866926B1 (en) Queue depth management for communication between host and peripheral device
US6108694A (en) Memory disk sharing method and its implementing apparatus
JP2004046372A (en) Distributed system, resource allocation method, program, and recording medium with which resource allocation program is recorded
GB2375408A (en) Data transmission via a network, using shared memory
KR20010095103A (en) An intelligent bus interconnect unit
JPH11167468A (en) Data transfer device

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)