US20130036243A1 - Host-daughtercard configuration with double data rate bus - Google Patents

Host-daughtercard configuration with double data rate bus Download PDF

Info

Publication number
US20130036243A1
Authority
US
United States
Prior art keywords
host
dma read
read requests
dma
gdf
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/632,721
Inventor
James Everett Grishaw
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cisco Technology Inc
Original Assignee
Cisco Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cisco Technology Inc filed Critical Cisco Technology Inc
Priority to US13/632,721
Publication of US20130036243A1
Legal status: Abandoned

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/10Program control for peripheral devices
    • G06F13/12Program control for peripheral devices using hardware independent of the central processor, e.g. channel or peripheral processor
    • G06F13/124Program control for peripheral devices using hardware independent of the central processor, e.g. channel or peripheral processor where hardware is a sequential transfer control unit, e.g. microprocessor, peripheral processor or state-machine
    • G06F13/128Program control for peripheral devices using hardware independent of the central processor, e.g. channel or peripheral processor where hardware is a sequential transfer control unit, e.g. microprocessor, peripheral processor or state-machine for dedicated transfers to a network

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Systems (AREA)

Abstract

A double data rate bus system includes a host-network interface card configuration wherein the host is configured to recognize the network interface card to establish a double data rate bus between the host and the network interface card. The host is configured to generate a plurality of generic data frame queues. Each of the generic data frame queues is configured to receive and to transmit generic data frames via the double data rate bus. The network interface card is configured to transmit a plurality of direct memory access read requests to the host via the double data rate bus. The host is configured to allow each of the plurality of direct memory access read requests to remain pending prior to responding to any one of the plurality of direct memory access read requests.

Description

    PRIORITY CLAIM
  • This application is a continuation application of U.S. Non-Provisional application Ser. No. 12/339,732, filed Dec. 19, 2008 (now U.S. Pat. No. 8,281,049). The contents of U.S. Non-Provisional application Ser. No. 12/339,732 (now U.S. Pat. No. 8,281,049) are incorporated by reference in their entirety.
  • FIELD
  • The present disclosure relates generally to network interface cards.
  • BACKGROUND
  • Host platforms may communicate with corresponding networks through network interface cards. The network interface cards may be implemented as peripheral devices such as daughtercards. The daughtercards may manage data flowing to and from the host platform. The host platform and daughtercard may communicate with one another to indicate current conditions pertaining to the network and the host platform. The network and host platform may utilize the daughtercard to perform data transactions with one another.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. Moreover, in the figures, like reference numerals designate corresponding parts throughout the different views.
  • FIG. 1 depicts a block diagram of an example data bus for a host-daughtercard configuration.
  • FIG. 2 depicts a block diagram of an example host-daughtercard system.
  • FIG. 3 depicts a block diagram of an example daughtercard configuration.
  • FIG. 4 depicts a flow diagram of an example operation of performing direct memory access transactions.
  • FIG. 5 depicts a flow diagram of an example operation of utilizing a plurality of generic data frame queues.
  • FIG. 6 depicts an example of a bus protocol used to transmit and receive data from a plurality of generic data frame queues.
  • DESCRIPTION OF EXAMPLE EMBODIMENTS Overview
  • According to one aspect of the disclosure, a double data rate bus system may include a host-network interface card configuration where the host is configured to recognize the network interface card to establish a double data rate bus between the host and the network interface card. The host may be configured to generate a plurality of generic data frame queues. Each of the generic data frame queues is configured to receive and to transmit generic data frames via the double data rate bus.
  • According to another aspect of the disclosure, the network interface card may be configured to transmit a plurality of direct memory access read requests to the host via the double data rate bus, and the host may be configured to allow each of the plurality of direct memory access read requests to remain pending prior to responding to any one of the plurality of direct memory access read requests.
  • According to another aspect of the disclosure, a method of operating a double data rate bus system may include passing a first generic data frame via a double data rate bus. The first generic data frame may correspond to a first generic data frame queue. The method may further include suspending the passing of the first generic data frame based on a first predetermined condition. The method may further include passing a second generic data frame via the double data rate bus upon suspension of the passing of the first generic data frame. The second generic data frame may correspond to a second generic data frame queue. The method may further include resuming the passing of the first generic data frame via the double data rate bus upon completion of transmission of the second generic data frame.
  • According to another aspect of the disclosure, a computer-readable medium may be encoded with computer-executable instructions that are executable with a processor. The computer-readable medium may comprise instructions executable to pass a first generic data frame via a double data rate bus. The first generic data frame may correspond to a first generic data frame queue. The computer-readable medium may further include instructions executable to suspend the passing of the first generic data frame based on a first condition. The computer-readable medium may further include instructions to transmit a second generic data frame via the double data rate bus upon suspension of passing the first generic data frame. The second generic data frame may correspond to a second generic data frame queue. The computer-readable medium may further include instructions to resume passing the first generic data frame via the double data rate bus upon completion of transmission of the second generic data frame.
  • Example Embodiments
  • In one example, a host-network interface card arrangement may be used to communicate with a network, such as a wide area network. The host may communicate with a network through a network interface card. The network interface card and host may communicate with one another over a double data rate bus. In one example, multiple generic data frame queues may be established in a host, allowing generic data frames to be transmitted and received by the generic data frame queues via the double data rate bus. In another example, the host may receive a plurality of direct memory access read requests from the network interface card via the double data rate bus. The received plurality of direct memory access read requests may remain outstanding prior to the host responding to any of the outstanding direct memory access read requests.
  • In one example shown in FIG. 1, a daughtercard 10 may be connected to a host platform 14. The example shown in FIG. 1 may be used in a routing platform, which may include any component utilized to implement connectivity within a network or between networks, such as a router, bridge, switch, layer 2 or layer 3 switch, or gateway.
  • The daughtercard 10 may be a plug-in module that provides a wide area network (WAN) interface to any routers or routing devices that may be interconnected to the daughtercard 10. FIG. 1 shows a block diagram of an example system that includes a daughtercard 10 that may be referred to as an enhanced high-speed wide-area-network interface card (EHWIC) that may support an 8-bit double data rate (DDR) bi-directional bus 12. The EHWIC 10 may communicate with a host platform 14 via the DDR bus 12. Signals depicted in FIG. 1 are: TxD[7:0]: transmit data bus from host; TxCtrl: transmit control bit from host; TxClk: transmit clock from host; RxD[7:0]: receive data bus to host; RxCtrl: receive control bit to host; and RxClk: receive clock to host.
  • In one example the DDR bus 12 is a synchronous bus. The DDR bus 12 may be used to: 1) provide a high-speed data path between the host platform 14 and the EHWIC 10 for WAN data; 2) provide access to the on-board registers; and 3) provide the EHWIC 10 on-board devices with direct memory access (DMA) to the host platform 14. In one example, TxClk and RxClk may run at 50 MHz. The DDR mode allows data to be captured on every clock edge. In one example, the DDR mode provides aggregate bandwidth of approximately 800 Mbps (400 Mbps in each direction).
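  • As a rough cross-check of those figures, the raw edge-rate arithmetic for an 8-bit bus at 50 MHz with capture on both edges can be computed directly, as in the C sketch below. Reconciling the raw rate with the quoted 400 Mbps usable per direction (protocol overhead from control, idle, and CRC8 bytes, or a conservative rating) is an assumption on our part, not something the text states.

```c
#include <stdio.h>

/* Raw DDR signaling rate for the bus described above: an 8-bit bus,
 * a 50 MHz clock, and data captured on both clock edges. The text
 * quotes ~400 Mbps usable per direction; treating the difference from
 * the raw rate as protocol overhead is an assumption. */
int main(void) {
    const double clock_hz = 50e6; /* TxClk / RxClk */
    const double edges    = 2.0;  /* DDR: capture on rising and falling edges */
    const double width    = 8.0;  /* TxD[7:0] / RxD[7:0] */

    double raw_bits_per_s = clock_hz * edges * width;
    printf("raw per-direction rate: %.0f Mbps\n", raw_bits_per_s / 1e6);
    return 0;
}
```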
  • In one example, address pins ADDR[7:0] of a legacy Host/WIC parallel interface may be appropriated as the TxD[7:0] transmit data bus from the host in the DDR bus 12 of the Host/HWIC interface. Also, data pins of a data bus DATA[7:0] of the legacy Host/WIC parallel interface may be appropriated as the RxD[7:0] receive data bus to the host in the DDR bus of the Host/HWIC interface. Additionally, an enable signal CS_L of the legacy interface may be appropriated as the TxCtrl pin of the Host/HWIC interface, and a read signal RD_L pin of the legacy Host/WIC interface may be appropriated as the RxCtrl pin of the Host/HWIC interface. Further, legacy Host/WIC parallel interface echo clock pins may be appropriated as the TxClk and RxClk pins in the DDR bus 12 of the Host/HWIC interface.
  • In one example, an EHWIC interface may be plug compatible with a legacy WIC/Host interface. However, the functionality of some of the pins may differ in this implementation. In the presently-described example, the parallel port pins and the “Echo Clock” (TxCE) pins of the legacy Host/WIC parallel interface may be cannibalized for the EHWIC high-speed bus, the DDR bus 12. This provides for backwards compatibility by still leaving serial peripheral interface (SPI) lines (e.g., SPI bus 30 in FIG. 3), serial communication controllers (SCCs) capable of handling multiple protocols such as high-level data link control (HDLC), universal asynchronous receiver/transmitter (UART), asynchronous HDLC, transparent mode, or binary synchronous communication (BiSYNC), for example, and interrupt lines, etc., available for conventional uses in the legacy Host/WIC parallel interface.
  • FIG. 2 is a block diagram of a host-EHWIC system. In FIG. 2 the host platform 14 includes host memory 16 and a central processing unit (CPU) 18 coupled to a host termination logic block 20 including an EHWIC interface 22. The host termination logic 20 may include logic (for example, in the form of a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) which resides on the host platform 14 and serves as an interface between the HWIC DDR bus 12 and the rest of the devices on the motherboard.
  • The EHWIC 10 may include an EHWIC termination logic block 24 including an EHWIC interface 26. The EHWIC termination logic 24 may include logic (for example, in the form of an FPGA or ASIC), which resides on the EHWIC 10 and serves as an interface between the DDR bus 12 and other devices on the motherboard. The EHWIC interface 26 may be coupled to a 68-pin connector 28, which may be pin compatible with a legacy EHWIC connector (not shown).
  • FIG. 3 is a block diagram of the example EHWIC interface 26 coupled to the connector 28. The EHWIC 10 side may include a SPI bus 30 connected to a cookie 32, which in one example may be a non-volatile memory, such as a non-volatile RAM (NVRAM) in the form of an EEPROM, storing information about a particular implementation. The DDR bus 12 is coupled to the EHWIC termination logic 24 via the EHWIC interface 26, which may include a power pin (not shown) for supplying power to the EHWIC 10. In one example, the EHWIC termination logic 24 resides in a FPGA 34 having a set of configurable registers 36. In one example, the registers 36 may be used to configure the EHWIC 10.
  • One component of backward compatibility is providing the host platform 14 with a system for determining whether a legacy or upgraded daughtercard has been connected. In one example, this function may be required because, although the parts are pin compatible, certain pins are used to perform completely different functions. In one example, an EHWIC 10 may be plugged into an older host platform, which, in one example, may be the host platform 14; the older host platform 14 may still access the cookie 32 on the EHWIC 10 via the SPI lines 30 and determine that an inserted WIC, such as the EHWIC 10, is not supported. The EHWIC 10 may be required not to drive the Rx lines of the DDR bus 12 until “enabled”, so that the WIC parallel port bus pins will not cause bus contention in the event that an EHWIC 10 is inserted into an older host platform 14. Also, the legacy SCCs may still be available on an EHWIC 10 and can be used for purposes such as management channels.
  • Two types of frames may be used for communication between the EHWIC 10 and the host platform 14 via the DDR bus 12: control frames and data frames. Data frames may be used to pass the larger packets of data between the host platform 14 and the EHWIC 10, en route to and from the line interface. The control frames may be smaller in nature and perform administrative functions, which may pre-empt the data frames in order to reduce latency. In one example, the control frame formats may be generated and received in hardware, whereas the data frame formats may be determined by the host CPU 18 (with the exception of the direct memory access (DMA) frames noted below).
  • The control bits (RxCtrl, TxCtrl) may distinguish data frames from control frames: TxCtrl, RxCtrl = “0” indicates that streaming data is being passed; TxCtrl, RxCtrl = “1” indicates that control information is being passed. In one example, the Tx and Rx buses may continuously be transmitting bytes, such as control bytes, data bytes, or idle bytes.
  • In one example the EHWIC 10 and the host platform 14 may communicate with one another through a frame-based protocol. The frame-based protocol may implement at least two types of data frames, such as DMA data frames and generic data frames (GDFs). In one example, both DMA data frames and GDFs may be transmitted if and only if the respective control bit (TxCtrl or RxCtrl) is “0”; all data frames may carry a CRC8 as the last byte; and the DMA data frames and GDFs may start with an encapsulation header. The GDFs may begin with a 0x12 byte. The DMA data frames may begin with a 0x61, 0x62, 0x66, or 0x67 byte.
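  • As an illustration of these encodings, a receiver's first-byte dispatch might look like the C sketch below. The byte values are from the text; the function and type names are hypothetical.

```c
#include <stdint.h>

/* Hypothetical first-byte dispatch for received data frames, using the
 * encapsulation bytes given above: GDFs begin with 0x12, and DMA data
 * frames begin with 0x61, 0x62, 0x66, or 0x67. */
typedef enum { FRAME_GDF, FRAME_DMA, FRAME_UNKNOWN } frame_type_t;

static frame_type_t classify_data_frame(uint8_t lead_byte) {
    switch (lead_byte) {
    case 0x12:
        return FRAME_GDF;
    case 0x61: case 0x62: case 0x66: case 0x67:
        return FRAME_DMA;
    default:
        return FRAME_UNKNOWN; /* not a recognized encapsulation header */
    }
}
```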
  • In one example, a DMA data frame may serve as a vehicle for EHWIC-initiated DMA transactions. The DMA data frame may allow the EHWIC 10 to read and write to host memory. DMA data frames may be processed entirely in hardware, so it is not necessary for a host processor, such as the host CPU 18, to be involved in these transactions. For example, simple register accesses may be inefficient for reading blocks of registers, so DMA frames are utilized to transfer blocks of register data from the EHWIC 10 to the host 14 over the DDR bus 12. The EHWIC 10 requires data structures set up in host processor memory to support its GDF transmit and receive operations. All of these data structures are shared by the host CPU 18 and the host termination logic 20 through DMA accesses.
  • The GDF may be an implementation-specific frame agreed upon between host driver software and the EHWIC termination logic 24. For example, a data frame may be an internet protocol (IP) packet or asynchronous transfer mode (ATM) cell that the EHWIC termination logic 24 sends to or receives from a physical layer interface device or data transfer device on the EHWIC 10. In another example, there may be an encapsulation, such as a header with an 8-bit port number indicating which physical layer interface device the EHWIC termination logic 24 sends/receives the packet from. One purpose of the GDF may be to allow flexibility to create whatever frame format will best suit a specific EHWIC being designed.
  • The DMA frames and GDFs may each be processed in a different manner. The DMA data frames may originate in hardware (on the EHWIC 10 if it is a DMA request, or on the host if it is a DMA response). Upon receipt they are also processed entirely in hardware, leaving the host CPU completely uninvolved in the transaction (until perhaps the very end, after the transaction is completed, when the host is notified via interrupt that a DMA transaction has occurred). FIG. 2 shows an example, in which the EHWIC 10 is configured to generate a plurality of DMA read requests 27 individually designated as DMA read 1 through N. DMA read throughput may be modeled by the following equation:
  • DMA Read Throughput = (DMA Transaction Size · DMA Outstanding Reads) / DMA Transaction Latency (Eqn. 1)
  • In one example, the number of outstanding DMA read requests 27 may be up to 8 (e.g., N may be 8 in FIG. 2), which may result in up to 8 times the DMA read throughput. Thus, once a DMA read request frame is issued, the EHWIC 10 allows additional DMA read requests 27 to be issued while other DMA read response(s) are still pending in the host platform 14. At any point in time, an EHWIC 10 may be allowed a maximum of 8 pending EHWIC DMA read responses.
  • DMA Outstanding Reads of Eqn. 1 may be the number of outstanding reads that the EHWIC 10 has issued at any point in time. If a host system returns read responses faster than an EHWIC 10 can issue DMA read requests 27, the EHWIC 10 will only have (at most) one outstanding read at any point in time, therefore no DMA Read Throughput increase will be afforded. Likewise if the EHWIC 10 is only fast enough to sustain a couple of outstanding reads, the DMA Read Throughput will only be increased by that proportional amount (×2).
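  • Eqn. 1 is straightforward to evaluate. The C sketch below plugs in purely illustrative numbers (none come from the document) to show how throughput scales with outstanding-read depth until the EHWIC can no longer keep requests in flight.

```c
#include <stdio.h>

/* Eqn. 1: DMA Read Throughput =
 * (DMA Transaction Size * DMA Outstanding Reads) / DMA Transaction Latency */
static double dma_read_throughput(double transaction_bytes,
                                  double outstanding_reads,
                                  double latency_seconds) {
    return transaction_bytes * outstanding_reads / latency_seconds;
}

int main(void) {
    /* Illustrative values only: 512-byte transactions, 100 us latency. */
    for (int n = 1; n <= 8; n++)
        printf("%d outstanding read(s): %.2f MB/s\n", n,
               dma_read_throughput(512.0, (double)n, 100e-6) / 1e6);
    return 0;
}
```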
  • EHWIC DMA read responses by the host platform 14 may be required to be returned in the order that the DMA read requests 27 are issued. If any DMA read requests 27 are re-ordered outside of the host termination logic 20 (e.g., in a system controller), the corresponding read responses must be restored to the original (request) order before the read responses are sent over the DDR bus 12.
  • A DMA write frame (not shown) followed by a DMA read request 27 may occur in a serial fashion due to the nature of the DMA Write Frame (as previously noted). However, the host termination logic 20 may permit a DMA write frame to immediately follow a DMA read request 27, even before a DMA read-response frame (not shown) has been sent. The DMA read requests 27 may be interleaved with DMA write frames in any manner and regardless of the arrival of read response frames as long as the maximum number of outstanding DMA read requests 27 is not exceeded. This may allow DMA operations to more fully utilize the DDR bus 12 by offering maximum flexibility in the movement of DMA data across the DDR bus 12 in both directions.
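  • One way to picture the issue rule this implies on the daughtercard side is a simple credit counter, as in the C sketch below. The eight-deep limit is from the text; the counter structure and function names are assumptions about one plausible implementation, not the patented hardware.

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_OUTSTANDING_DMA_READS 8 /* per the limit stated above */

/* Tracks DMA read requests in flight. DMA write frames may be
 * interleaved freely and consume no read credit. */
typedef struct {
    uint8_t reads_in_flight;
} dma_credit_t;

static bool can_issue_read(const dma_credit_t *c) {
    return c->reads_in_flight < MAX_OUTSTANDING_DMA_READS;
}

static void on_read_issued(dma_credit_t *c)   { c->reads_in_flight++; }
static void on_read_response(dma_credit_t *c) { c->reads_in_flight--; }

/* DMA writes need no credit check and may go out at any time. */
static bool can_issue_write(const dma_credit_t *c) { (void)c; return true; }
```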
  • FIG. 4 shows a flowchart depicting an example operation of performing DMA transactions via a DDR bus. The operation may include an act 100 of generating a plurality of DMA read requests. In one example, act 100 may be performed with a configuration such as that shown in FIG. 2. The EHWIC 10 may generate a plurality of DMA read requests 27. The operation may also include an act 102 of serially transmitting the DMA read requests via a DDR bus. In the example of FIG. 2, act 102 may be performed by the EHWIC 10 by transmitting the DMA read requests 27 via the DDR bus 12 to the host platform 14.
  • The operation may further include an act 104 of receiving each of the plurality of DMA read requests prior to generating a DMA read response for at least one of the DMA read requests. In the example of FIG. 2, act 104 may be performed by the host platform 14 receiving each of the plurality of DMA read requests 27 prior to a DMA read response being generated by the host platform 14. In one example, the host platform 14 may have up to 8 outstanding DMA read requests 27 prior to generating a DMA read response.
  • GDFs may be processed substantially in software on the host side. Any special encapsulations for transmit frames (outside of the initial 0x12 byte) must be created by the host processor. Likewise, received frames are also entirely processed by the host processor (after the leading 0x12 byte and trailing CRC8 byte are removed (see FIG. 6)).
  • On the EHWIC side, GDFs may be processed in hardware (unless a processor resides on the EHWIC 10), which has detailed knowledge of the EHWIC-specific GDF that has been agreed upon between the host CPU 18 and hardware of the EHWIC 10. The two types of data frames differ in flexibility. Since the DMA data frame is processed entirely in hardware, it is not flexible and may remain exactly the same format from EHWIC to EHWIC.
  • Since the GDF is created and parsed by the host CPU 18, the format of the GDF is extremely flexible. It is intended that the host CPU 18 will choose a GDF format that will facilitate the design of each particular EHWIC 10.
  • In one example, the EHWIC 10 may be configured to operate with a plurality of GDF rings 29 in the host platform 14 as shown in FIG. 2. The plurality of GDF rings 29 may allow proper prioritization of different classes of service, e.g., voice, video, management, and data. The GDF rings 29 exist in both receive and transmit directions, and the GDFs 31 may include additional per-queue DDR bus flow control features. In FIG. 2, the host platform 14 is shown as including the plurality of GDF rings 29, individually denoted as GDF rings 0 through 3, which may each be considered a dedicated GDF queue.
  • FIG. 5 shows a flowchart depicting an operation of utilizing a plurality of GDF queues generated in a host platform. In the example configuration shown in FIG. 2, GDFs may be passed via the DDR bus 12, allowing the GDF rings 29, or queues, to transmit or receive GDFs. An act 200 may include serially receiving or transmitting each of a plurality of GDFs via a DDR bus. In one example, act 200 may be performed using a configuration shown in FIG. 2 in which GDFs 31 may be transmitted and received by the host platform 14. The host platform 14 may include a plurality of GDF rings 29 in the host termination logic 20. Each GDF ring 0 through 3 may receive and transmit a GDF via the DDR bus 12.
  • The operation may further include an act 202 of suspending each GDF receipt or transmission if congestion is occurring during the respective receipt or transmission. For example, in FIG. 2, a GDF 31 may be received by the GDF ring 0. This receipt may become congested if the GDF ring 0 is becoming full. This receipt may be suspended, allowing another GDF ring 29, such as GDF ring 1 through 3, to receive GDFs while experiencing uncongested conditions. The suspension may occur through a control byte being transmitted from the host platform 14 to the EHWIC 10. The suspension may occur accordingly for the other GDF rings 29. Similarly, this may occur in the transmit direction. For example, a GDF 31 may be transmitted by a GDF ring 29. Transmission may need to be suspended if network congestion is occurring. The host platform 14 may be made aware through a control byte transmitted by the EHWIC 10 via the DDR bus 12.
  • The operation may further include an act 204 of resuming each suspended GDF receipt or transmission if a suspended GDF receipt/transmission is no longer congested and no other GDF receipt or transmission is currently occurring. For example, in FIG. 2, if a GDF 31 receipt or transmission is currently suspended in any of the GDF rings 0 through 3, a receipt or transmission, respectively, may be resumed for each suspended GDF ring 0 through 3 if the suspended GDF ring 0 through 3 is no longer congested, or full, and no other GDF ring 29 is currently receiving or transmitting a GDF. In another example, the GDF rings 29 may be prioritized with respect to one another, allowing certain GDF rings 29 to have resume priority over the other GDF rings 29.
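  • One way to picture the resume decision of act 204 is the small C sketch below. The rule that nothing resumes while another ring is transferring is from the text; the fixed lowest-ring-number-first priority is only an assumption standing in for the prioritization the text leaves open.

```c
#include <stdbool.h>

/* Pick the next GDF ring (0 through 3) eligible to resume: it must be
 * suspended, no longer congested, and no other ring may currently be
 * receiving or transmitting a GDF. Returns -1 if none qualifies. */
static int next_ring_to_resume(const bool suspended[4],
                               const bool congested[4],
                               bool another_transfer_in_progress) {
    if (another_transfer_in_progress)
        return -1;
    for (int ring = 0; ring < 4; ring++) /* assumed priority: lowest ring first */
        if (suspended[ring] && !congested[ring])
            return ring;
    return -1;
}
```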
  • FIG. 6 shows an EHWIC GDF. The following are bus codes that may be used with regard to the GDF rings (queues) 29:
  • Command Opcode 0x12: GDFs passed between the EHWIC 10 and the host 14 may begin with a 0x12 byte in order to indicate GDF encapsulation. The 0x12 Command Opcode may be followed by a GDF queue byte.
  • GDF Queue: The GDF queue field indicates which GDF queue a corresponding data frame belongs to, and whether or not the corresponding data frame is a continuation, i.e., a resumption of a GDF previously suspended, such as through a “Suspend Transmit Request for GDF Queue” control byte. The byte encodings are as follows (a pack/parse sketch follows this field list):
  • Bit 7: Frame Resume Indicator: Indicates whether a GDF frame is a continuation.
  • 0x0—This is the beginning of a new GDF frame (not previously suspended by a “Suspend Transmit Request for GDF Queue” control byte, followed by the 0xF5 “Frame Suspend Indicator”).
  • 0x1—This GDF frame is a continuation (frame previously suspended by a “Suspend Transmit Request for GDF Queue” control byte, followed by the 0xF5 “Frame Suspend Indicator”).
  • The Frame Resume Indicator bit may only be used for error detection; e.g., if the host termination logic 20 receives a continuation frame when a new frame is expected, this will be reported as an error event.
  • Bits 6-2: Unused—Set to zeroes.
  • Bits 1-0: GDF Queue: Indicates the GDF ring 29 that the corresponding frame belongs to: 0x0—GDF ring 0; 0x1—GDF ring 1; 0x2—GDF ring 2; and 0x3—GDF ring 3.
  • Data: The “Data” field may be any data of any non-zero length (provided, of course, that it follows the implementation-specific format agreed upon by host driver software and the EHWIC termination logic 24). For example, this could be an IP packet, ATM cell, or PPP frame, encapsulated with a port number or VC number in the header.
  • Rx Flags: For GDFs passed from the EHWIC 10 to the host platform 14, the upper 2 bits of the Rx Flags byte may be written to receive buffer descriptor word 1, bits 23-22. This may allow the passing of error/status information that may not be readily available for insertion into the beginning of the GDF, for example, line CRC calculations that are not completed until the end of the frame arrives at a physical layer interface device or data transfer device on the EHWIC 10 that does not store the entire frame before passing it up to the host.
  • The Rx Flags byte is also placed into the receive buffer and counted in the data length field of the receive buffer descriptor, so if the Rx Flags functionality is not needed this byte may be used for frame data as long as the host processor ignores the Rx Flags in the Receive Buffer Descriptor (word 1 bits 23-22).
  • Cyclic redundancy check 8 (CRC8): an 8-bit checksum used to detect data corruption, which may be calculated over all frame bytes except the CRC8 field itself. Additionally, the CRC8 may not be calculated over any inserted control frames.
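  • Pulling the GDF fields above together, the C sketch below packs and parses the GDF Queue byte and computes a CRC8. The field layout and the 0x12 opcode are from the text; the CRC8 polynomial (0x07) is purely an assumption, since the document does not specify one.

```c
#include <stddef.h>
#include <stdint.h>

#define GDF_OPCODE 0x12 /* leading encapsulation byte of every GDF */

/* GDF Queue byte: bit 7 is the Frame Resume Indicator, bits 6-2 are
 * unused (zero), and bits 1-0 select the GDF ring, per the encoding above. */
static uint8_t gdf_queue_byte(int is_continuation, uint8_t ring) {
    return (uint8_t)(((is_continuation ? 1u : 0u) << 7) | (ring & 0x03u));
}

static int gdf_is_continuation(uint8_t qbyte) { return (qbyte >> 7) & 1; }
static uint8_t gdf_ring(uint8_t qbyte)        { return qbyte & 0x03u; }

/* CRC8 over all frame bytes except the CRC field itself. The polynomial
 * (x^8 + x^2 + x + 1, i.e. 0x07) is an assumption; the document does
 * not name one. */
static uint8_t crc8(const uint8_t *data, size_t len) {
    uint8_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;
}
```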
  • Control frames may have three principal functions: 1) flow control by means of stop, resume, and transmit control characters; 2) read/write commands utilized to perform the functions of the legacy parallel port; and 3) interrupt frames. A control frame (or byte) may be transmitted if and only if the respective control bit (TxCtrl or RxCtrl) is “1”. A data frame may be transmitted if and only if the respective control bit (TxCtrl or RxCtrl) is “0”.
  • The control frames for implementing the suspend transmit requests and resume transmit requests for the GDF rings 0 through 3 are below:
  • 0xA0—Suspend Transmit Request for GDF Ring 0: This control character may be sent either by the host platform 14 or by the EHWIC 10, to request that the other party suspend transmitting data frames for GDF ring 0. This is intended for flow control purposes, to prevent the overflow of a first in first out (FIFO) or the GDF ring 0 that is becoming full. Upon receiving a “suspend transmit” request, a transmitting party may send a maximum of 128 more GDF bytes from GDF ring 0 before ceasing transmission of all GDF bytes from the GDF ring 0. After this, the transmitting party may continue transmitting other types of data frames. If the transmitting party is not currently transmitting a GDF from GDF ring 0, it may continue transmitting other types of data frames without interruption. However, it may only transmit up to 128 more bytes from the GDF ring 0 until such time as it receives a resume byte.
  • 0xA3—Resume Transmit Request for GDF Ring 0: This control character may be sent either by the host platform 14 or by the EHWIC 10, to request that the other party resume data frame transmission for GDF ring 0, after transmission has been suspended by the “Suspend Transmit Request for GDF Ring 0” control byte.
  • 0xA5—Suspend Transmit Request for GDF Ring 1: This control character may be sent either by the host platform 14 or by the EHWIC 10, to request that the other party suspend transmitting data frames for GDF ring 1. This is intended for flow control purposes, to prevent the overflow of a FIFO or GDF ring 1 becoming full. Upon receiving a “suspend transmit” request, the transmitting party may send a maximum of 128 more GDF bytes from the GDF ring 1 before ceasing transmission of all GDF bytes from the GDF ring 1. After this, the transmitting party may continue transmitting other types of data frames. If the transmitting party is not currently transmitting a GDF from GDF ring 1, it may continue transmitting other types of data frames without interruption. However, it may only transmit up to 128 more bytes from the GDF ring 1 until such time as it receives a resume byte.
  • 0xA6—Resume Transmit Request for GDF Ring 1: This control character can be sent either by the host platform 14 or by the EHWIC 10, to request that the other party resume data frame transmission for GDF ring 1, after transmission has been suspended by the “Suspend Transmit Request for GDF Ring 1” control byte.
  • 0xA9—Suspend Transmit Request for GDF Ring 2: This control character can be sent either by the host platform 14 or by the EHWIC 10, to request that the other party suspend transmitting data frames for GDF ring 2. This is intended for flow control purposes, to prevent the overflow of a FIFO or GDF ring 2 that is becoming full. Upon receiving a “suspend transmit” request, the transmitting party may send a maximum of 128 more GDF bytes from the GDF ring 2 before ceasing transmission of all GDF bytes from the GDF ring 2. After this, the transmitting party may continue transmitting other types of data frames. If the transmitting party is not currently transmitting a GDF from GDF ring 2, it may continue transmitting other types of data frames without interruption. However, it may only transmit up to 128 more bytes from the GDF ring 2 until such time as it receives a resume byte.
  • 0xAA—Resume Transmit Request for GDF Ring 2: This control character can be sent either by the host platform 14 or by the EHWIC 10, to request that the other party resume data frame transmission for the GDF ring 2, after transmission has been suspended by the “Suspend Transmit Request for GDF Ring 2” control byte.
  • 0xAC—Suspend Transmit Request for GDF Ring 3: This control character can be sent either by the host platform 14 or by the EHWIC 10, to request that the other party suspend transmitting data frames for the GDF ring 3. This is intended for flow control purposes, to prevent the overflow of a FIFO or the GDF ring 3 that is becoming full. Upon receiving a “suspend transmit” request, the transmitting party may send a maximum of 128 more GDF bytes from the GDF ring 3 before ceasing transmission of all GDF bytes from the GDF ring 3. After this, the transmitting party may continue transmitting other types of data frames. If the transmitting party is not currently transmitting a GDF from the GDF ring 3, it may continue transmitting other types of data frames without interruption. However, it may only transmit up to 128 more bytes from the GDF ring 3 until such time as it receives a resume byte.
  • 0xAF—Resume Transmit Request for GDF Ring 3: This control character can be sent either by the host platform 14 or by the EHWIC 10, to request that the other party resume data frame transmission for the GDF ring 3, after transmission has been suspended by the “Suspend Transmit Request for GDF Ring 3” control byte.
  • 0xF5—Frame Suspend Indicator: This control character may be sent either by the host platform or by the EHWIC 10, in response to a GDF Queue Suspend Transmit request, such as that corresponding to control bytes 0xA0, 0xA5, 0xA9, or 0xAC, in order to indicate that a current GDF has been suspended and not completely transmitted. This indicates to a receiving party that it has received only a partial frame and it should act accordingly (for example, wait for the rest of the frame before checking the CRC8 field or forwarding the frame).
  • The 0xF5 control byte is the end-of-frame indicator for a suspended (partial) frame. After the last byte of the suspended (partial) frame is sent, the 0xF5 control byte should appear before any subsequent end-of-frame/idle control byte. Once a frame has been suspended with the 0xF5 Frame Suspend Indicator it may be resumed with a Frame Resume Indicator, discussed in regard to FIG. 6.
  • The 0xF5 Frame Suspend Indicator may not be sent between the 0x12 EHWIC GDF opcode and the EHWIC Generic Data Frame Queue Byte (see FIG. 6). This is because before the queue byte is sent, the receiving logic cannot determine which queue to suspend. Frame Suspend Indicators sent between the 0x12 EHWIC GDF opcode and the EHWIC GDF Queue Byte should be ignored by the receiving logic, and possibly logged as an error.
  • The host termination logic 20 may only suspend an in-progress frame (via the 0xF5 control byte) to the EHWIC 10 in response to receiving a GDF Queue Suspend Transmit request of some type (control bytes 0xA0, 0xA5, 0xA9, or 0xAC) from the EHWIC 10. However, the EHWIC 10 may use 0xF5 to suspend a GDF “spontaneously”; that is, without having previously received a GDF Queue Suspend Transmit Request from the host. Regardless of whether the EHWIC module chooses to use spontaneous suspend, it may suspend a GDF queue in response to a GDF Queue Suspend Transmit request received from the host termination logic 20, and remain suspended until it receives the respective GDF Queue Resume Transmit Request.
  • The host termination logic 20 is not allowed to spontaneously suspend GDFs, in part because this should never be necessary, since the host termination logic 20 controls the order in which data arrives from the host controller (via DMA), and in part in order to keep the EHWIC 10 logic unburdened from receiving spontaneously suspended GDFs.
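  • As a sketch of the per-ring flow-control handshake described above, the C fragment below tracks the suspend/resume state of one GDF ring. The opcode values, the 128-byte grace window, and the 0xF5 indicator are from the text; the state structure and the exact accounting are assumptions about one plausible implementation, not the patented logic.

```c
#include <stdbool.h>
#include <stdint.h>

/* Suspend/resume control bytes per GDF ring, from the list above. */
static const uint8_t SUSPEND_OP[4] = { 0xA0, 0xA5, 0xA9, 0xAC };
static const uint8_t RESUME_OP[4]  = { 0xA3, 0xA6, 0xAA, 0xAF };
#define FRAME_SUSPEND_INDICATOR 0xF5
#define GRACE_BYTES 128 /* max GDF bytes sent after a suspend request */

typedef struct {
    bool suspended;       /* peer asked this ring to stop */
    int  grace_remaining; /* GDF bytes still allowed before stopping */
} ring_tx_state_t;

static void on_suspend_request(ring_tx_state_t *r) {
    r->suspended = true;
    r->grace_remaining = GRACE_BYTES;
}

static void on_resume_request(ring_tx_state_t *r) {
    r->suspended = false;
}

/* Returns true if one more GDF byte may be sent on this ring. When the
 * grace window runs out mid-frame, the sender must emit the 0xF5 Frame
 * Suspend Indicator and park the partial frame until the ring resumes. */
static bool may_send_gdf_byte(ring_tx_state_t *r) {
    if (!r->suspended)
        return true;
    if (r->grace_remaining > 0) {
        r->grace_remaining--;
        return true;
    }
    return false;
}
```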
  • The memory 16 and termination logic 20, 24 may additionally or alternatively be a computer-readable storage medium with processing instructions. Data representing instructions executable by the programmed CPU 18 and termination logic 20, 24 are provided for operating a host platform-daughtercard configuration. The instructions for implementing the processes, methods and/or techniques discussed herein are provided on computer-readable storage media or memories, such as a cache, buffer, RAM, removable media, hard drive or other computer-readable storage media. Computer-readable storage media include various types of volatile and nonvolatile storage media. The functions, acts or tasks illustrated in the figures or described herein are executed in response to one or more sets of instructions stored in or on computer-readable storage media. The functions, acts or tasks are independent of the particular type of instruction set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, microcode and the like, operating alone or in combination. Likewise, processing strategies may include multiprocessing, multitasking, parallel processing and the like. In one embodiment, the instructions are stored on a removable media device for reading by local or remote systems. In other embodiments, the instructions are stored in a remote location for transfer through a computer network or over telephone lines. In yet other embodiments, the instructions are stored within a given computer, CPU, GPU, or system.
  • In one example, the operations of FIGS. 4 and 5 may be performed through logic encoded on at least one memory and executed on at least one of the associated processors as described in regard to FIGS. 1-3. The logic in each memory is appropriate for the associated processor. Logic encoded in one or more tangible media for execution is defined as the instructions that are executable by a programmed processor and that are provided on the computer-readable storage media, memories, or a combination thereof.
  • Any of the devices, features, methods, and/or techniques described may be mixed and matched to create different systems and methodologies.
  • While the invention has been described above by reference to various embodiments, it should be understood that many changes and modifications can be made without departing from the scope of the invention. It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and scope of this invention.

Claims (20)

1. An apparatus comprising:
a network interface card configured to:
generate a plurality of direct memory access (DMA) read requests;
transmit at least two of the plurality of DMA read requests over a bus to a host without reception of a DMA read response to at least one of the plurality of DMA read requests from the host.
2. The apparatus of claim 1, wherein the network interface card is configured to transmit the at least two of the plurality of DMA read requests over a plurality of transmissions.
3. The apparatus of claim 1, wherein the network interface card is configured to transmit up to a maximum number of DMA read requests that are allowed to be pending with the host without reception of a DMA read response from the host.
4. The apparatus of claim 3, wherein the maximum number of DMA read requests that are allowed to be pending with the host comprises eight DMA read requests.
5. The apparatus of claim 1, wherein the network interface card is further configured to transmit DMA write frames that are interleaved with the plurality of DMA read requests.
6. A system comprising:
a host configured to:
receive a plurality of direct memory access (DMA) read requests over a bus from a network interface card;
generate a DMA read response to at least one of the plurality of DMA read requests after receipt of at least two of the plurality of DMA read requests.
7. The system of claim 6, wherein the host is configured to receive the at least two of the plurality of DMA read requests over a plurality of transmissions.
8. The system of claim 6, wherein the host is configured to receive a maximum number of DMA read requests that are allowed to be pending with the host prior to generation of a DMA read response to at least one of the plurality of DMA read requests.
9. The system of claim 8, wherein the maximum number of DMA read requests that are allowed to be pending with the host prior to generation of a DMA read response comprises eight DMA read requests.
10. The system of claim 6, wherein the host is configured to transmit a plurality of DMA read responses to the plurality of DMA read requests in an order corresponding to an order that the host received the plurality of DMA read requests.
11. The system of claim 6, further comprising the network interface card in communication with the host through the bus, wherein the network interface card is configured to:
generate the plurality of DMA read requests; and
transmit at least two of the plurality of DMA read requests over the bus to the host without reception of a DMA read response from the host.
12. The system of claim 11, wherein the network interface card is further configured to transmit the plurality of DMA read requests interleaved with DMA write frames.
13. A method comprising:
generating, with a network interface card, a plurality of direct memory access (DMA) read requests; and
transmitting, with the network interface card, at least two of the plurality of DMA read requests over a bus to a host without receiving a DMA read response from the host.
14. The method of claim 13, wherein transmitting, with the network interface card, at least two of the plurality of DMA read requests comprises:
transmitting, with the network interface card, at least two of the plurality of DMA read requests in a plurality of transmissions.
15. The method of claim 13, wherein transmitting, with the network interface card, at least two of the plurality of DMA read requests comprises transmitting, with the network interface card, up to a maximum number of DMA read requests that are allowed to be pending with the host.
16. The method of claim 15, wherein the maximum number of DMA read requests that are allowed to be pending with the host comprises eight DMA read requests.
17. The method of claim 13, further comprising:
interleaving, with the network interface card, a plurality of DMA write frames with the plurality of DMA read requests; and
transmitting, with the network interface card, the plurality of DMA write frames interleaved with the plurality of DMA read requests.
18. The method of claim 13, further comprising:
receiving, with the host, at least two of the plurality of DMA read requests over the bus from the network interface card; and
generating, with the host, a DMA read response to at least one of the plurality of DMA read requests after receiving the at least two of the plurality of DMA read requests.
19. The method of claim 18, wherein receiving, with the host, at least two of the plurality of DMA read requests over the bus from the network interface card comprises: receiving, with the host, a maximum number of DMA read requests that are allowed to be pending with the host; and
wherein generating, with the host, a DMA read response to at least one of the plurality of DMA read requests comprises: generating, with the host, a plurality of DMA read responses to the plurality of DMA read requests after receiving the maximum number of DMA read requests.
20. The method of claim 19, further comprising:
transmitting, with the host, the plurality of DMA read responses in an order corresponding to an order in which the DMA read requests were transmitted.
US13/632,721 2008-12-19 2012-10-01 Host-daughtercard configuration with double data rate bus Abandoned US20130036243A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/632,721 US20130036243A1 (en) 2008-12-19 2012-10-01 Host-daughtercard configuration with double data rate bus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/339,732 US8281049B2 (en) 2008-12-19 2008-12-19 Host-daughtercard configuration with double data rate bus
US13/632,721 US20130036243A1 (en) 2008-12-19 2012-10-01 Host-daughtercard configuration with double data rate bus

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/339,732 Continuation US8281049B2 (en) 2008-12-19 2008-12-19 Host-daughtercard configuration with double data rate bus

Publications (1)

Publication Number Publication Date
US20130036243A1 true US20130036243A1 (en) 2013-02-07

Family

ID=42267740

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/339,732 Active 2029-09-09 US8281049B2 (en) 2008-12-19 2008-12-19 Host-daughtercard configuration with double data rate bus
US13/632,721 Abandoned US20130036243A1 (en) 2008-12-19 2012-10-01 Host-daughtercard configuration with double data rate bus

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/339,732 Active 2029-09-09 US8281049B2 (en) 2008-12-19 2008-12-19 Host-daughtercard configuration with double data rate bus

Country Status (1)

Country Link
US (2) US8281049B2 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8954803B2 (en) * 2010-02-23 2015-02-10 Mosys, Inc. Programmable test engine (PCDTE) for emerging memory technologies
KR20190090614A (en) * 2018-01-25 2019-08-02 에스케이하이닉스 주식회사 Memory controller and operating method thereof
US11109395B2 (en) * 2019-02-28 2021-08-31 Huawei Technologies Co., Ltd. Data transmission preemption
CN112256624B (en) * 2020-11-03 2022-09-13 中国人民解放军国防科技大学 DMA communication device, chip, equipment and method for high-speed interconnection network interface chip

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5280623A (en) * 1992-03-04 1994-01-18 Sun Microsystems, Inc. Versatile peripheral bus
US5619728A (en) * 1994-10-20 1997-04-08 Dell Usa, L.P. Decoupled DMA transfer list storage technique for a peripheral resource controller
US6070210A (en) * 1997-01-10 2000-05-30 Samsung Electronics Co., Ltd. Timing mode selection apparatus for handling both burst mode data and single mode data in a DMA transmission system
US6175883B1 (en) * 1995-11-21 2001-01-16 Quantum Corporation System for increasing data transfer rate using sychronous DMA transfer protocol by reducing a timing delay at both sending and receiving devices
US20030023783A1 (en) * 2001-07-26 2003-01-30 International Business Machines Corp. DMA exclusive cache state providing a fully pipelined input/output DMA write mechanism
US20050188122A1 (en) * 2004-02-25 2005-08-25 Analog Devices, Inc. DMA controller utilizing flexible DMA descriptors
US20070174506A1 (en) * 2005-12-29 2007-07-26 Fujitsu Limited Data processing apparatus

Family Cites Families (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5249280A (en) * 1990-07-05 1993-09-28 Motorola, Inc. Microcomputer having a memory bank switching apparatus for accessing a selected memory bank in an external memory
US5675807A (en) * 1992-12-17 1997-10-07 Tandem Computers Incorporated Interrupt message delivery identified by storage location of received interrupt data
US5784390A (en) * 1995-06-19 1998-07-21 Seagate Technology, Inc. Fast AtA-compatible drive interface with error detection and/or error correction
US5805833A (en) * 1996-01-16 1998-09-08 Texas Instruments Incorporated Method and apparatus for replicating peripheral device ports in an expansion unit
US5978866A (en) * 1997-03-10 1999-11-02 Integrated Technology Express, Inc. Distributed pre-fetch buffer for multiple DMA channel device
US6393457B1 (en) * 1998-07-13 2002-05-21 International Business Machines Corporation Architecture and apparatus for implementing 100 Mbps and GBPS Ethernet adapters
US6883171B1 (en) * 1999-06-02 2005-04-19 Microsoft Corporation Dynamic address windowing on a PCI bus
US6598109B1 (en) * 1999-12-30 2003-07-22 Intel Corporation Method and apparatus for connecting between standard mini PCI component and non-standard mini PCI component based on selected signal lines and signal pins
US6772249B1 (en) * 2000-11-27 2004-08-03 Hewlett-Packard Development Company, L.P. Handheld option pack interface
US7401126B2 (en) * 2001-03-23 2008-07-15 Neteffect, Inc. Transaction switch and network interface adapter incorporating same
TW493119B (en) * 2001-03-28 2002-07-01 Via Tech Inc Method for automatically identifying the type of memory and motherboard using the same
US20020159458A1 (en) * 2001-04-27 2002-10-31 Foster Michael S. Method and system for reserved addressing in a communications network
US6990549B2 (en) * 2001-11-09 2006-01-24 Texas Instruments Incorporated Low pin count (LPC) I/O bridge
US7457845B2 (en) * 2002-08-23 2008-11-25 Broadcom Corporation Method and system for TCP/IP using generic buffers for non-posting TCP applications
US6874054B2 (en) * 2002-12-19 2005-03-29 Emulex Design & Manufacturing Corporation Direct memory access controller system with message-based programming
KR100449807B1 (en) * 2002-12-20 2004-09-22 한국전자통신연구원 System for controlling Data Transfer Protocol with a Host Bus Interface
JP2004318628A (en) * 2003-04-18 2004-11-11 Hitachi Industries Co Ltd Processor unit
US7685436B2 (en) * 2003-10-02 2010-03-23 Itt Manufacturing Enterprises, Inc. System and method for a secure I/O interface
US7181551B2 (en) * 2003-10-17 2007-02-20 Cisco Technology, Inc. Backward-compatible parallel DDR bus for use in host-daughtercard interface
US7617291B2 (en) * 2003-12-19 2009-11-10 Broadcom Corporation System and method for supporting TCP out-of-order receive data using generic buffer
GB0404696D0 (en) * 2004-03-02 2004-04-07 Level 5 Networks Ltd Dual driver interface
US7676814B2 (en) * 2004-03-25 2010-03-09 Globalfoundries Inc. Four layer architecture for network device drivers
US7469267B2 (en) * 2004-06-28 2008-12-23 Qlogic, Corporation Method and system for host device event synchronization
US7688838B1 (en) * 2004-10-19 2010-03-30 Broadcom Corporation Efficient handling of work requests in a network interface device
US7733770B2 (en) * 2004-11-15 2010-06-08 Intel Corporation Congestion control in a network
US8116312B2 (en) * 2006-02-08 2012-02-14 Solarflare Communications, Inc. Method and apparatus for multicast packet reception
US7817558B2 (en) * 2006-05-19 2010-10-19 Cisco Technology, Inc. Flow based flow control in an ethernet switch backplane
US9178839B2 (en) * 2008-07-24 2015-11-03 International Business Machines Corporation Sharing buffer space in link aggregation configurations

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5280623A (en) * 1992-03-04 1994-01-18 Sun Microsystems, Inc. Versatile peripheral bus
US5619728A (en) * 1994-10-20 1997-04-08 Dell Usa, L.P. Decoupled DMA transfer list storage technique for a peripheral resource controller
US6175883B1 (en) * 1995-11-21 2001-01-16 Quantum Corporation System for increasing data transfer rate using sychronous DMA transfer protocol by reducing a timing delay at both sending and receiving devices
US6070210A (en) * 1997-01-10 2000-05-30 Samsung Electronics Co., Ltd. Timing mode selection apparatus for handling both burst mode data and single mode data in a DMA transmission system
US20030023783A1 (en) * 2001-07-26 2003-01-30 International Business Machines Corp. DMA exclusive cache state providing a fully pipelined input/output DMA write mechanism
US20050188122A1 (en) * 2004-02-25 2005-08-25 Analog Devices, Inc. DMA controller utilizing flexible DMA descriptors
US20070174506A1 (en) * 2005-12-29 2007-07-26 Fujitsu Limited Data processing apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
'Data Communications and Networking - Fourth Edition' by Behrouz A. Forouzan, pgs. 318-325, copyright 2007 by the McGraw-Hill Companies, Inc. *
'DMA' article from Ganssle.com, published in Embedded Systems Programming, October 1994. *

Also Published As

Publication number Publication date
US8281049B2 (en) 2012-10-02
US20100161851A1 (en) 2010-06-24

Similar Documents

Publication Publication Date Title
EP1530850B1 (en) Store and forward switch device, system and method
KR101298862B1 (en) Method and apparatus for enabling id based streams over pci express
US7849252B2 (en) Providing a prefix for a packet header
EP1442548B1 (en) A general input/output inteface and related method to manage data integrity
US8782321B2 (en) PCI express tunneling over a multi-protocol I/O interconnect
KR100611268B1 (en) An enhanced general input/output architecture and related methods for establishing virtual channels therein
US7882294B2 (en) On-chip bus
JPH06511338A (en) Method and apparatus for parallel packet bus
US20140307748A1 (en) Packetized Interface For Coupling Agents
US20080294831A1 (en) Method for link bandwidth management
EP0453863A2 (en) Methods and apparatus for implementing a media access control/host system interface
US8166227B2 (en) Apparatus for processing peripheral component interconnect express protocol
US10853289B2 (en) System, apparatus and method for hardware-based bi-directional communication via reliable high performance half-duplex link
US20130036243A1 (en) Host-daughtercard configuration with double data rate bus
US7770095B2 (en) Request processing between failure windows
CN115437978A (en) High-speed peripheral component interconnection interface device and operation method thereof
US7610415B2 (en) System and method for processing data streams
US20030084029A1 (en) Bounding data transmission latency based upon a data transmission event and arrangement
EP1687922B1 (en) Backward-compatible parallel ddr bus for use in host-daughtercard interface
US6693905B1 (en) Data exchange unit
US11782497B2 (en) Peripheral component interconnect express (PCIE) interface device and method of operating the same
CN117692535B (en) PCIe protocol message order preserving device
CN116055422A (en) Device and method for controlling data packet sending sequence
TWI240859B (en) Error forwarding in an enhanced general input/output architecture and related methods

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION