WO1991013397A1 - A method and apparatus for transferring data through a staging memory - Google Patents


Info

Publication number: WO1991013397A1
Authority: WO (WIPO, PCT)
Application number: PCT/US1991/001251
Other languages: French (fr)
Prior art keywords: data, staging, memory, elements, length value
Inventors: Chris W. Eidler; Hoke S. Johnson, III; Kaushik S. Shah
Original assignee: SF2 Corporation
Application filed by SF2 Corporation
Publication of WO1991013397A1 (en)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 5/00 Methods or arrangements for data conversion without changing the order or content of the data handled
    • G06F 5/06 Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, i.e. speed regularising or timing, e.g. delay lines, FIFO buffers; over- or underrun control therefor
    • G06F 5/10 Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, having a sequence of storage locations each being individually accessible for both enqueue and dequeue operations, e.g. using random access memory
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0656 Data buffering arrangements

Definitions

  • Known computer systems typically transfer data into and out of contiguous locations in memory to minimize processor interrupts and simplify the transfer process.
  • A separate processor interrupt is usually required to transfer each non-contiguous segment of data into and out of memory.
  • The present invention is an improvement over such systems in that, with respect to the writing of data from memory to a device interface, non-contiguous segments of data stored in the memory are joined by DMA control logic to form a contiguous DMA data transfer to the device interface, and in that, with respect to the reading of data into memory from the device interface, a contiguous DMA data transfer from the device interface is routed by DMA control logic into selected, not necessarily contiguous, segments of the staging memory. After initial set-up, processor attention is not required in either case to transfer the individual data segments until the entire transfer is completed.
  • In accordance with the present invention, there is provided a staging memory logically divided into a plurality of addressable elements. Identifiers corresponding to available memory elements are arbitrarily selected by a microprocessor from a pool of such identifiers and are stored in a sequence storage circuit such as a FIFO storage circuit.
  • The present invention is described in the context of a mass storage system that includes a staging memory for transferring data between a network bus device interface and a mass storage device interface.
  • When a packet is to be received by the staging memory from a device interface connected to the network communication bus, an element identifier is accessed from the sequence storage circuit by DMA control logic, and the packet is stored in the corresponding location in the memory.
  • The logic then indicates that the memory element has received a packet, such as by placing a status word corresponding to the element in a storage register and by generating a control signal such as a processor interrupt signal.
  • The packet is then checked by a system processor of the mass storage system to determine if it contains data for mass storage.
  • If the packet does not contain data for storage, the system processor notifies other software that a non-data packet has been received. Otherwise, the system processor places information identifying the received packet in a look-up data table. Multiple packets of data can be received into the memory at high speed because the sequence storage circuit can be programmed prior to transfer with multiple element identifiers.
  • Data stored in the memory elements is transferred to mass storage by a snaking operation which requires only a single intervention by the system microprocessor.
  • By "snaking" the inventors mean combining data from non-contiguous memory locations into a single data transmission, as sketched below. This is accomplished by programming a sequence storage circuit with a series of element identifiers corresponding to memory elements having data to be included in the single data transmission. Under the control of logic separate from and set up by the system processor, the data from the corresponding series of elements is read from the memory and assembled into a data stream of predetermined length for DMA transfer to a mass storage device interface, in accordance with the programmed order of the element identifiers in the sequence storage circuit.
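In software terms, snaking can be pictured as walking the queued element identifiers in order and concatenating the addressed segments into one contiguous stream. The following C sketch is an illustration only: the function and variable names, the fixed segment length, and the flat byte-array model of the staging memory are all assumptions, and in the patent this work is performed by DMA control logic rather than by processor code.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical illustration of "snaking": segments scattered through
 * the staging memory are concatenated, in the order their element
 * addresses were queued, into a single contiguous stream. */
#define SEG_LEN 520              /* example header/data segment length */

size_t snake(const unsigned char *staging_mem,
             const size_t *seq_fifo,   /* queued element start offsets */
             size_t n_elements,
             unsigned char *out_stream)
{
    size_t out = 0;
    for (size_t i = 0; i < n_elements; i++) {
        /* copy one header/data segment from its (arbitrary) location */
        memcpy(out_stream + out, staging_mem + seq_fifo[i], SEG_LEN);
        out += SEG_LEN;
    }
    return out;                  /* total length of the snaked stream */
}
```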
  • Preferably, the data stream comprises header fields, data fields and error correction fields.
  • Any of these fields (e.g., the header fields) may exist in the memory, or may be generated by the DMA control logic as a result of instructions provided to the logic by the system microprocessor during set-up.
  • The control logic pads the last data field in the data stream if necessary to achieve the proper block size defined for transmissions between the memory and the device interface.
  • When data is to be read from mass storage, the data is transferred to the staging memory as a single contiguous DMA data stream from a mass storage device interface.
  • The data stream is divided into segments which are stored in selected, not necessarily contiguous, memory elements of the staging memory in accordance with a series of element identifiers programmed into a sequence storage circuit by the system processor. This process is referred to herein as "desnaking."
  • The element identifiers correspond to available memory elements and are arbitrarily selected by the system microprocessor from a pool of such identifiers.
  • The data is stored under the control of logic separate from and set up by the system processor, such that system processor intervention is not required after initial set-up until the transfer is completed.
  • The system processor keeps track of which memory elements have been programmed to receive which data segments, and when ready to do so sets up logic to retrieve data segments from the staging memory, assemble them into individual packets and provide them to the network bus device interface for transmission over the network communication bus.
  • FIG. 1 is a block diagram of a mass storage system including a staging memory in accordance with the principles of the present invention;
  • FIG. 2 is a block diagram of an embodiment of the present invention, including a staging memory and receive address and receive status FIFO's;
  • FIG. 3 is a diagram showing the format of a typical data packet of the type known in the prior art and suitable for use with the present invention;
  • FIG. 4 is a diagram of a data table provided in processor memory to identify memory elements of the staging memory of FIG. 1 that have received data packets from the network communication bus or header/data segments from a mass storage device interface;
  • FIG. 5 is a block diagram of an embodiment of the snaking/desnaking system of the present invention, including the staging memory of FIG. 1 and a snaking/desnaking FIFO;
  • FIG. 6 is a flow diagram of the states of state machine sequence circuit 506 of FIG. 5 during execution of transfers of data between staging memory 110 and DMA channel 105 in accordance with the principles of the present invention;
  • FIG. 7 is a block diagram of an embodiment of the packet transmission system of the present invention, including a staging memory and transmit address and transmit status FIFO's for each network bus device interface; and
  • FIG. 8 is a block diagram of an alternative embodiment of the snaking system of the present invention.
  • FIG. 1 shows a mass storage system 100 that includes one or more mass storage devices 102 (e.g., disk drives or disk drive arrays) and corresponding device interfaces 104 for communicating between devices 102 and other circuitry in mass storage system 100.
  • Mass storage system 100 is connected to a network communication bus 109 via device interfaces 108.
  • Mass storage system 100 also includes a staging memory 110 for temporarily storing information during a data transfer between mass storage devices 102 and a host computer attached to network communication bus 109. This staging memory is used, for example, to hold data received from one of either device interfaces 104 or 108 pending readiness of another device interface to receive the data.
  • When data is to be written to mass storage, staging memory 110 receives the data from one of device interfaces 108 and holds the data until it is transferred to one of device interfaces 104.
  • When data is to be read from mass storage, staging memory 110 receives the data from one of device interfaces 104 and holds the data until it is transferred to one of device interfaces 108.
  • Staging memory 110 may also hold data that is transferred between device interfaces of like kind (e.g., a transfer from one of device interfaces 104 to another). This same memory may be used for handling transfers of information other than mass storage data, such as command messages between a host computer connected to network bus 109 and mass storage system 100.
  • Data transfers in mass storage system 100 are controlled by system processor 107 through DMA control logic components 103 and 106.
  • DMA control logic component 103 controls the transfer of data between device interfaces 108 and staging memory 110.
  • DMA control logic components 106 control the transfer of data between device interfaces 104 and staging memory 110.
  • As shown in FIG. 1, two device interfaces 108 are connected to staging memory 110 through a 2:1 multiplexer 111, which in response to a control signal from DMA logic component 103 determines which of the two device interfaces 108 may communicate with staging memory 110.
  • Each of device interfaces 108 includes a port 108a for interfacing with DMA bus 112.
  • Staging memory 110 likewise includes a port 110a for interfacing with DMA bus 112.
  • DMA control logic component 103 provides control signals to each of ports 108a and 110a and multiplexer 111 to accomplish transfers of data on DMA bus 112.
  • Each of device interfaces 104 has a port 104a for transmitting or receiving DMA data on either of two DMA buses 105 as determined by the setting of corresponding 2:1 multiplexers 104b. The set-up of each of multiplexers 104b is controlled by two DMA control logic components 106.
  • Staging memory 110 also includes two ports 110b, each for communicating with a respective one of DMA buses 105.
  • Each of DMA control logic components 106 provides control signals to ports 104a and 110b and multiplexers 104b to accomplish data transfers on DMA bus 105.
  • System processor 107 has direct access to staging memory 110 via port 110c.
  • System processor 107 also has direct access to device interfaces 104 and 108.
  • DMA control logic component 103 serves the purpose of off-loading data transfer overhead from system processor 107 in connection with a data transfer between staging memory 110 and one of device interfaces 108 after an initial set-up of DMA control logic component 103 and device interface 108.
  • DMA control logic components 106 serve the purpose of off-loading data transfer overhead from system processor 107 in connection with a data transfer between staging memory 110 and one of device interfaces 104 after an initial set-up of the appropriate DMA logic components 106.
  • FIG. 2 shows a block diagram of an exemplary embodiment of the packet receiving system of the present invention implemented to receive data packets from device interfaces 108 into staging memory 110.
  • Device interfaces 108 receive information over bus 109 in the form of packets, such as the data packet 300 shown in FIG. 3.
  • The format of these packets typically is defined such that each packet has a known size, usually indicated in the header field described below, and includes three fields: a packet header or identification field 300a, a data field 300b, and a field 300c for validation information (e.g., CRC information).
  • The actual format of the packets may vary depending on the information processing system in which the packet receiving system of the present invention is used. As will be described in greater detail below, the present invention is capable of accommodating variations in packet size.
  • Data packet 300 may also be used to transfer control or status information between a computer and the mass storage system, such that the data field of a packet received by device interface 108 may contain information other than mass storage data (e.g., control or status information).
  • Packets may be referenced according to any of a number of conventional schemes. One such conventional scheme involves transaction-based packet transfers. Each transaction has a number by which it and each packet included in the transaction are referred to. Where a plurality of packets is included in a particular transaction, the order of the mass storage data in the packets is identified by an offset value in each packet, equal to the number of words, bytes or other units of data by which the beginning of the mass storage data in that packet is offset from the beginning of the mass storage data in the first packet in the transaction.
  • A transaction identification field 302 and an offset value field 304 are shown in data packet 300 as part of identification field 300a, as modeled in the sketch below.
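For concreteness, the identification fields just described might be laid out as in the following C sketch; the struct name and field widths are illustrative assumptions, not taken from the patent.

```c
#include <stdint.h>

/* Hypothetical layout of identification field 300a's sub-fields.
 * Widths are assumed for illustration only. */
typedef struct {
    uint32_t transaction_id;  /* field 302: names the transaction this
                                 packet belongs to */
    uint32_t offset;          /* field 304: position of this packet's
                                 data within the overall transfer */
    /* ... other control elements of header field 300a ... */
} packet_id_fields_t;
```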
  • The exemplary embodiment of the packet receiving system of the present invention described herein is discussed in the context of a network using this type of packet reference. As will be apparent to one of skill in the art, however, embodiments of the present invention can be practiced with other packet identification schemes. Moreover, as will also be apparent, the present invention can be practiced without regard to any particular destination buffer address that may be specified in the packet.
  • A transfer of packeted data over a conventional shared system or network bus is initiated by a command packet from a remote central processor to the mass storage system.
  • The command packet typically requests the mass storage system to respond by providing a return packet including, among other information, a receiving address, a source address (provided to the mass storage system by the requesting computer) and a transaction identifier.
  • The remote central processor places the data it seeks to transfer into packets, and places the receiving address and transaction identifier generated by the mass storage system into the corresponding fields of each data packet.
  • The central processor also generates an offset value for each data packet, and typically transmits the data packets to the mass storage system in the order of their offset values.
  • Depending on the protocol of the communication bus and the relative priorities of the transfers, these data packets may be received by the mass storage system interspersed among data packets associated with other transactions.
  • Conventionally, the data from the received data packets would be placed in contiguous memory locations beginning at the receiving address identified in the address field of the packets, plus the offset designated in each packet.
  • The staging memory 110 of the present invention is useful in a mass storage system to allow received data packets to be stored in memory at non-contiguous locations unknown, even symbolically, to the remote central processor.
  • Staging memory 110 comprises an addressable memory circuit.
  • The memory elements of staging memory 110 may be implemented using commercially available integrated circuit RAM devices (e.g., devices such as Motorola's 6293 RAM chip).
  • Commercially available register devices also may be used to provide ports 110a, 110b, and 110c. Each port preferably comprises a data latch register, an address counter and a read/write enable register, and may include other logic as may be desired to implement the port functions.
  • Since the purpose of staging memory 110 is to stage network packets, the memory is logically divided by system processor 107 into a plurality of "staging elements" 200, each of which can be described by an address and a length. In this embodiment, all staging elements are of equal length, that length being the maximum expected packet length. This logical division is accomplished by system processor 107 before mass storage system 100 enters an on-line state. System processor 107 divides the size of staging memory 110 by the maximum expected packet length to determine the number of staging elements 200, and creates in its memory a list, SE_FREE_POOL, of the starting addresses of each staging element 200 (a software model of this division is sketched below).
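A rough software model of this division, with hypothetical names and a heap-allocated array standing in for the processor-memory SE_FREE_POOL list:

```c
#include <stdlib.h>

/* Hypothetical model of dividing staging memory into equal-length
 * staging elements and building the SE_FREE_POOL list of their
 * starting addresses (done once, before the system goes on-line). */
typedef struct {
    size_t *addr;   /* starting offsets of available staging elements */
    size_t  count;  /* number of staging elements */
} se_free_pool_t;

int se_free_pool_init(se_free_pool_t *pool,
                      size_t staging_mem_size,
                      size_t max_packet_len)   /* = element length */
{
    pool->count = staging_mem_size / max_packet_len;
    pool->addr  = malloc(pool->count * sizeof *pool->addr);
    if (pool->addr == NULL)
        return -1;
    for (size_t i = 0; i < pool->count; i++)
        pool->addr[i] = i * max_packet_len;    /* element i's start */
    return 0;
}
```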
  • When a remote central processor initiates a write operation to mass storage system 100, system processor 107 generates and returns to the central processor, as previously described, a packet including a transaction identifier. System processor 107 also places the generated transaction identifier into a memory-resident table for subsequent use, as described hereafter, in completing outstanding transactions after data is placed in staging memory 110. An example of such a table, described in greater detail below, is shown in FIG. 4.
  • System processor 107 then programs a sequence storage circuit 202 in DMA control logic 103 with a series of staging element identifiers. These identifiers correspond to individual staging elements of staging memory 110 which are available to receive packets. They are selected by system processor 107 from the available or currently unused staging elements identified on the SE_FREE_POOL list, and are individually accessed by port control hardware 203 to store data packets received by device interfaces 108 into the corresponding staging elements of staging memory 110.
  • Port control hardware 203 comprises logic, which may be conventionally implemented, such as by using discrete logic or programmable array logic, to manipulate the control, address, and data registers of ports 108a and 110a, and to control multiplexer 111, as required by the particular implementation of these circuits for transferring data between device interfaces 108 and staging memory 110.
  • Preferably, sequence storage circuit 202 is implemented using a conventional FIFO (first-in, first-out) storage circuit (labeled RCV ADDR FIFO), in which staging element identifiers stored in the circuit are accessed in the same sequence in which they are loaded by system processor 107.
  • The sequence in which the programmed identifiers are accessed by port control hardware 203 can be a different order if desired (e.g., the identifiers can be accessed in reverse order, such as by using a LIFO, or last-in first-out, circuit).
  • The sequence storage circuit can also be implemented by circuitry other than a FIFO or LIFO circuit, such as by using RAM or register arrays, or a microprocessor.
  • Preferably, each staging element identifier includes the starting address SE in staging memory 110 of the corresponding staging element.
  • A short "tag number" RT is joined to the address, and this tag number and the corresponding starting address and length of each staging element loaded into circuit 202 are placed by system processor 107 into a reference table 204.
  • The purpose of the tag number is to provide a shorthand reference to each starting address SE loaded into RCV ADDR FIFO 202, for use in generating status words in RCV STATUS FIFO 206.
  • By placing the tag number, instead of the actual starting address of the staging element, in RCV STATUS FIFO 206, the necessary bit-width of FIFO 206 is kept small. The generation of the status words in RCV STATUS FIFO 206 is described below.
  • Tag numbers are loaded into RCV ADDR FIFO 202 in consecutive order, although another order may be used, as long as the order of tag numbers in RCV ADDR FIFO 202 is reflected by reference table 204.
  • Preferably, the tag numbers have values from 0 to (m-1), where m is a parameter variable equal to the depth, or a portion thereof, of RCV ADDR FIFO 202 (i.e., the number of staging element identifiers that can be loaded into RCV ADDR FIFO 202 at one time).
  • For example, tag number RT may comprise a 4-bit binary number having a value of 0-15.
  • In that case, the first staging element address loaded into RCV ADDR FIFO 202 might be assigned a tag number of 0, in which case the second will be assigned 1, etc.
  • The tag number thus acts as a modulo-16 counter, such that the next tag number used after 15 would be 0 (see the sketch below).
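The tag arithmetic can be modeled as below; the names and the table layout are assumptions, and the modulo-16 width follows the 4-bit example above.

```c
#include <stddef.h>

/* Hypothetical model of tag-number assignment: tags cycle modulo m
 * (here m = 16, matching the 4-bit example), and a table-204
 * analogue maps each live tag back to its staging element address. */
enum { M = 16 };

typedef struct { size_t se_addr; int valid; } ref_entry_t;

static ref_entry_t ref_table[M];  /* reference table 204 analogue */
static unsigned    next_tag;      /* modulo-M counter */

unsigned assign_tag(size_t se_addr)
{
    unsigned tag = next_tag;
    next_tag = (next_tag + 1) % M;    /* after 15, wrap back to 0 */
    ref_table[tag].se_addr = se_addr;
    ref_table[tag].valid   = 1;       /* live until its status is read */
    return tag;                       /* queued with the address */
}
```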
  • System processor 107 reloads RCV ADDR FIFO 202 with the starting addresses of currently available staging elements from the SE_FREE_POOL list as the initially loaded addresses are used by port control hardware 203 to receive data packets arriving at device interfaces 108 from bus 109.
  • System processor 107 updates reference table 204 as the system processor reloads RCV ADDR FIFO 202.
  • The initial loading of staging element identifiers into RCV ADDR FIFO 202 is done when the mass storage system is initialized.
  • When a staging element receives a packet, it becomes unavailable until such time as that packet is transferred from the staging element to a mass storage device interface or is otherwise processed, at which time the staging element returns to an available state and is returned to the SE_FREE_POOL list.
  • System processor 107 keeps track of this cycling process using the SE_FREE_POOL list in order to know which staging elements are available at any given time to load into RCV ADDR FIFO 202.
  • Preferably, device interface 108 checks and strips the CRC information (e.g., validation field 300c) from packets that it receives from bus 109, such that a data segment comprising the header and data fields of each received packet is stored in staging memory 110. After the data segment from each data packet is received by staging memory 110, port control hardware 203 loads a corresponding status identifier into FIFO circuit 206 (RCV STATUS FIFO) to indicate completion of the packet transfer.
  • The status identifier includes a group of STAT bits and the tag number that was assigned in RCV ADDR FIFO 202 to the staging element which received the packet.
  • STAT bits may include, for example, an error bit that indicates whether or not a transmission error was detected by the DMA control logic 103 and a bit indicating which of device interfaces 108 received the packet from bus 109.
  • RCV STATUS FIFO 206 can be implemented using conventional circuitry other than a FIFO circuit.
  • Upon transition of RCV STATUS FIFO 206 from an empty to a non-empty state, an interrupt is generated to system processor 107 to indicate that a packet has been received.
  • In response, system processor 107 reads the tag number of the first status identifier in RCV STATUS FIFO 206 and determines the starting address and length of the corresponding staging element from table 204. (It may not be necessary to list the lengths of the staging elements in table 204 if they are all equal, in which case the length may be stored in a single memory location or register which is read by system processor 107.)
  • System processor 107 then places the starting address, length and offset of the packet into table 400 as shown in FIG. 4.
  • The staging element identifier entry in table 204 corresponding to the tag number read from RCV STATUS FIFO 206 is then set to a null value, to indicate that there is no longer a valid tag by that number in DMA control logic 103.
  • Table 400 is indexed according to the transaction identifiers of outstanding transactions, such that for a given transaction identifier, the starting addresses of staging elements having received data packets associated with that transaction are listed by system processor 107 in the order in which the packets of that transaction were received by a device interface 108, or in the order of their offset. Table 400 is used by system processor 107 to complete the transfer of data from staging memory 110 to mass storage device interfaces 104, as described in connection with FIG. 5, and is modeled in the sketch below.
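One plausible in-memory shape for table 400, again with hypothetical names and fixed capacities chosen only for illustration:

```c
#include <stddef.h>

/* Hypothetical model of table 400: per outstanding transaction, the
 * staging elements that received its packets, in arrival/offset order. */
enum { MAX_XACT = 32, MAX_PKTS = 64 };

typedef struct {
    size_t start_addr;  /* staging element starting address */
    size_t length;      /* stored header + data length */
    size_t offset;      /* packet's offset within the transaction */
} pkt_entry_t;

typedef struct {
    unsigned    xact_id;          /* transaction identifier */
    pkt_entry_t pkts[MAX_PKTS];   /* listed in order received */
    size_t      n_pkts;
} xact_entry_t;

static xact_entry_t table400[MAX_XACT];  /* indexed by transaction id */

void record_packet(unsigned xact_id, size_t addr, size_t len, size_t off)
{
    xact_entry_t *x = &table400[xact_id % MAX_XACT];
    x->xact_id = xact_id;
    x->pkts[x->n_pkts++] = (pkt_entry_t){ addr, len, off };
}
```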
  • It may be desired that new control information, such as a logical block address and a mass storage device number, for internal use by the mass storage system in completing the transfer to mass storage, be stored in a staging element with the data packet. This can be accomplished simply by having system processor 107 write the new control information over selected portions of the original control elements contained in the header field of the packet after the packet has been placed in staging memory 110.
  • Alternatively, such new control information can be added to the packet data field by DMA control logic 106 as the data fields are transferred from staging memory 110 to mass storage device interface 104.
  • After system processor 107 accesses the first status identifier in RCV STATUS FIFO 206 in response to an interrupt and places the address of the associated staging element into table 400, it checks RCV STATUS FIFO 206 for additional status identifiers and repeats the accessing process for each such identifier. If there are no more identifiers in RCV STATUS FIFO 206, system processor 107 returns to other operations until interrupted again (this drain loop is sketched below).
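The interrupt-and-drain behavior can be sketched as follows; the ring buffer stands in for RCV STATUS FIFO 206, and all names are hypothetical.

```c
#include <stdio.h>

/* Hypothetical sketch of the receive-status interrupt path: on the
 * empty-to-non-empty interrupt the processor drains every pending
 * status identifier, then returns to other work. */
typedef struct { unsigned stat; unsigned tag; } status_word_t;

enum { DEPTH = 16 };
static status_word_t fifo[DEPTH];      /* RCV STATUS FIFO stand-in */
static unsigned head, tail;            /* head == tail means empty */

static int fifo_pop(status_word_t *sw)
{
    if (head == tail)
        return 0;                      /* empty: stop draining */
    *sw = fifo[head];
    head = (head + 1) % DEPTH;
    return 1;
}

void rcv_status_isr(void)
{
    status_word_t sw;
    while (fifo_pop(&sw)) {
        /* Real code would resolve sw.tag through table 204, then log
         * the staging element in table 400 (data packet) or hand it
         * to other software (non-data packet). */
        printf("packet received: tag %u, stat %#x\n", sw.tag, sw.stat);
    }
}
```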
  • If a packet arriving at device interface 108 is other than a mass storage data packet, such as a command packet or other type of message, the packet is identified by system processor 107 as being something other than mass storage data.
  • Such a packet is received into staging memory 110 in the same manner as a mass storage data packet, except that system processor 107 does not place the corresponding staging element address into table 400. Instead, system processor 107 provides the staging element address containing the packet to other software in the control circuitry of the mass storage system, which in turn processes the packet and ultimately returns the staging element which contained the packet to the SE_FREE_POOL list.
  • When system processor 107 detects that all mass storage data packets for a particular write transaction have been received from bus 109, it prepares to transfer the mass storage data to one of mass storage device interfaces 104.
  • FIG. 5 illustrates an exemplary embodiment of a "snaking/desnaking" mechanism for transferring data between staging memory 110 and a DMA channel 105 connected to mass storage device interfaces 104.
  • The present invention concerns data transfers in both directions over DMA channel 105.
  • The term "snaking" has been previously described herein. First will be described a method for snaking together data stored in selected staging elements of staging memory 110 to transmit the data as a single contiguous DMA data transfer to one of mass storage device interfaces 104.
  • Before the transfer, each stored mass storage data packet is modified by system processor 107 to include a header comprising control and addressing information for use in directing the corresponding mass storage data to a particular logical or physical location in mass storage (as previously stated, this information may be written over the control information originally included in the header field of the packet).
  • To set up the snaking operation, system processor 107 must have knowledge of the starting memory addresses, lengths and offset values of the data segments to be snaked together. This can be accomplished, for example, by creating a look-up data table like that shown in FIG. 4 when the data is stored in staging memory 110, in the manner previously described.
  • System processor 107 programs sequence storage circuit 504 of DMA control logic 106 with a series of memory addresses ("SE ADDRESS") corresponding to the starting addresses in memory of the modified header fields contained in each of the selected staging elements.
  • Sequence storage circuit 504 is preferably implemented using a FIFO storage circuit (labeled "SNAKE/DESNAKE FIFO") in which staging memory addresses are programmed and accessed in accordance with the offset values of the data segments contained in the corresponding staging elements.
  • The sequence in which the staging memory addresses are programmed (and thus the sequence in which the contents of corresponding staging elements are transferred) can be varied as desired.
  • Sequence storage circuit 504 may also be implemented by circuitry other than a FIFO circuit, such as by using a LIFO circuit, RAM or register arrays, or a microprocessor. After programming sequence storage circuit 504, system processor 107 initiates the transfer by setting up port control hardware 506 of DMA control logic 106, which then completes the transfer without further processor attention.
  • Port control hardware 506 comprises a state machine sequence circuit and other logic, which may be conventionally implemented, such as by using discrete logic or programmable array logic, to manipulate the control, address and data registers of ports 104a and 110b, and to control multiplexer 104b, and may be constructed in any conventional manner to perform the DMA transfer without requiring further attention from system processor 107.
  • A flow diagram 600 illustrating the states of the state machine sequence circuit of port control hardware 506 as it executes DMA transfers between staging memory 110 and DMA channel 105 is shown in FIG. 6.
  • The states involved in a transfer from staging memory 110 to device interface 104 are shown in the lower portion of FIG. 6, and are generally referred to herein as read sequence 600a.
  • The state machine sequence circuit of port control hardware 506 begins read sequence 600a from an idle state 601 when it is initiated by system processor 107 with the loading of data transfer length counter 510.
  • The state machine sequence circuit first loads block length counter 508 with a value equal to the length of each header/data segment (e.g., 520 bytes) in staging memory 110, excepting fractional data segments (state 602).
  • The state machine sequence circuit next causes the port control hardware to generate any control signals that may be necessary to set up DMA channel 105, mass storage device interface port 104a and staging memory port 110b for the DMA transfer (state 604).
  • The state machine sequence circuit of port control hardware 506 then assembles the selected data segments into a single data stream 512 which is transferred over DMA channel 105 to mass storage device interface 104. This may be accomplished as follows.
  • The state machine sequence circuit causes the first staging memory address in sequence storage circuit 504 to be loaded into address counter 509, which provides staging memory port 110b with staging element addresses for directing header/data bytes out of staging memory 110 (state 606). Header 514 and data field 516, comprising a header/data segment 517, are then transferred from the addressed staging element to DMA channel 105. After each byte is transferred to DMA channel 105, block length counter 508 and data transfer length counter 510 are each decremented by one.
  • Although transfers between staging memory 110 and DMA channel 105 are described herein as taking place one byte at a time, such that block length counter 508 and data transfer length counter 510 are decremented on a byte-by-byte basis, ports 110b and 104a and DMA channel 105 may be implemented to transfer larger amounts of data in parallel (e.g., longwords), in which case counters 508 and 510 may be implemented to count longwords or other units rather than bytes.
  • At the end of each header/data segment, the state machine sequence circuit directs port control hardware 506 to reload block length counter 508 with the header/data segment length value and to cause the next staging memory address to be loaded into address counter 509 from sequence storage circuit 504, to begin the transfer of another header/data segment (state 608).
  • The state machine sequence circuit of port control hardware 506 also causes data validation information ("CRC" 518) to be appended to data field 516 of the first segment in DMA data stream 512 (state 610). This process (states 606, 608, 610) is repeated until data transfer length counter 510 equals one.
  • If block length counter 508 equals one when data transfer length counter 510 reaches one, the last byte of data is transferred and each counter is decremented to zero (state 612). A data validation field is then appended to the just-transferred data field (state 614) and the state machine sequence circuit returns to idle state 601. If block length counter 508 is not equal to one when data transfer length counter 510 equals one (i.e., when the last data byte stored in staging memory 110 is being transferred), block length counter 508 will have a non-zero value after the last stored data byte has been transferred and counters 508 and 510 have been decremented.
  • In that case, the state machine sequence circuit causes port control hardware 506 to continue transferring bytes of "pad" data on bus 105 as part of the data stream (state 616).
  • This "pad" data comprises a repeating value known as the pad byte. Pad bytes are transferred until the length of the last transmitted header/data segment is equal to the lengths of the previous header/data segments. This is accomplished by decrementing the block length counter after each pad byte is transmitted, and by continuing the padding operation until the block length counter reaches zero. After the last pad byte is transferred and the block length counter is decremented to zero (state 618), a data validation field is transmitted (state 614) to complete the DMA data stream from staging memory 110 to device interface 104. (Read sequence 600a, including the padding step, is modeled in the sketch below.)
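Read sequence 600a reduces to the nested loop below. This is a behavioral model only; the counter names mirror FIGS. 5 and 6, but the byte-buffer staging memory, the emit stand-ins and the pad value are assumptions.

```c
#include <stddef.h>
#include <stdio.h>

#define SEG_LEN  520   /* example header/data segment length */
#define PAD_BYTE 0x00  /* assumed repeating pad value */

static void emit(unsigned char b) { putchar(b); } /* DMA bus stand-in */
static void emit_crc(void)        { /* append validation field 518 */ }

/* Behavioral model of read sequence 600a: segments stream out in
 * FIFO order; the block length counter marks segment boundaries
 * (where CRC is appended), the data transfer length counter marks
 * the end of the stream, and pad bytes fill a fractional last
 * segment (states 616/618) before the final CRC (state 614). */
void snake_out(const unsigned char *staging_mem,
               const size_t *snake_fifo, /* queued element offsets */
               size_t total_len)         /* data transfer length ctr */
{
    size_t fifo_idx = 0;
    while (total_len > 0) {
        const unsigned char *seg = staging_mem + snake_fifo[fifo_idx++];
        size_t block_len = SEG_LEN;          /* block length counter */
        while (block_len > 0 && total_len > 0) {
            emit(*seg++);                    /* states 606/608/612 */
            block_len--;
            total_len--;
        }
        while (block_len > 0) {              /* fractional last block */
            emit(PAD_BYTE);                  /* states 616/618 */
            block_len--;
        }
        emit_crc();                          /* states 610/614 */
    }
}
```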
  • When a remote central processor on network bus 109 seeks to retrieve data from mass storage 102 (i.e., a read mass storage data operation), mechanisms similar to those described above can be used to route header/data segments 517 from a single contiguous DMA data stream 512 on DMA channel 105 into available staging elements of staging memory 110, and to then transfer the header/data segments in packet form from staging memory 110 to network communication bus 109 via device interface 108.
  • The read operation is initiated by a command packet from the remote central processor that provides, among other information, an identification of the mass storage data to be read.
  • The command packet is received by mass storage system 100 via a network bus device interface 108 and is transferred to staging memory 110 in the manner previously described.
  • System processor 107 reads the command packet stored in staging memory 110, and assigns one or more transaction identification numbers to the command. The number of transaction identification numbers used depends on the amount of data requested. System processor 107 then enters the transaction identification numbers into table 400, and notifies the appropriate mass storage device 102 to retrieve the data requested by the command packet.
  • When a mass storage device 102 is ready to transfer the data associated with a particular transaction identification number, the mass storage device notifies its device interface 104, which in turn causes system processor 107 to be interrupted.
  • System processor 107 determines how many staging elements of staging memory 110 would be required to transfer the mass storage data associated with the transaction identification number and obtains the necessary number of staging elements from the SE_FREE_POOL list. For each staging element, the address in staging memory 110 at which transfer of the header/data segment is to begin is loaded into SNAKE/DESNAKE FIFO 504. The staging element addresses are also entered into table 400 in the order in which they are loaded into FIFO 504. System processor 107 then selects an available DMA channel 105 and initiates the operation of the state machine sequence circuit within the DMA control logic component 106 associated with the selected channel.
  • The operation of the state machine sequence circuit is initiated by system processor 107 by loading data transfer length counter 510 with a value equal to the total length of data to be transferred (state 620).
  • The state machine sequence circuit then causes port control hardware 506 to generate any control signals that may be necessary to condition DMA channel 105 and port 110b of staging memory 110 for the DMA transfer (state 622), and loads block length counter 508 with a value equal to the length of each header/data segment 517 to be transferred (state 624).
  • The state machine sequence circuit of port control hardware 506 next causes the first staging memory address in sequence storage circuit 504 to be loaded into address counter 509, which provides staging memory port 110b with staging element addresses for directing header/data bytes into staging memory 110 (state 624).
  • A header/data segment 517 is then transferred from mass storage device interface port 104a to the addressed staging element. As each byte is transferred, block length counter 508 and data transfer length counter 510 are decremented by one.
  • At the end of each header/data segment, the state machine sequence circuit of port control hardware 506 checks the data validation field appended to the end of the header/data segment to ensure that the header/data segment was not corrupted during the transfer (state 628).
  • The data validation information is not necessarily stored in staging memory 110, but can be stripped from the header/data segment when checked by the state machine sequence circuit of port control hardware 506. If stripped, new validation information is appended when the header/data segment is later transferred out of staging memory 110. If the state machine sequence circuit of port control hardware 506 detects an error when the data validation information is checked, an interrupt is posted to system processor 107.
  • The state machine sequence circuit then directs the port control hardware to reload block length counter 508 with the header/data segment length value and to cause the next staging memory address from SNAKE/DESNAKE FIFO 504 to be loaded into address counter 509 to begin the transfer of another header/data segment. This process is repeated until the last data byte of the data stream on DMA channel 105 is transferred.
  • When block length counter 508 decrements from one to zero after the last data byte is transferred (state 630), the state machine sequence circuit checks and strips the last data validation field (state 632) and returns to idle state 601. (This desnaking sequence is modeled in the sketch below.)
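The desnaking direction is the mirror image: cut the incoming stream at fixed block boundaries, store each segment at the next queued element address, and check (and strip) the validation field per segment. A minimal sketch, with an always-true CRC placeholder standing in for the real checker:

```c
#include <stddef.h>

/* Placeholder validation check; a real implementation would verify
 * the CRC field carried with each header/data segment. */
static int crc_ok(const unsigned char *seg, size_t n)
{
    (void)seg; (void)n;
    return 1;
}

/* Behavioral model of desnaking (states 620-632): the contiguous
 * stream is cut into fixed-length header/data segments, each stored
 * at the next queued staging element address; validation fields are
 * checked and stripped rather than stored. */
int desnake_in(const unsigned char *stream, size_t total_len,
               unsigned char *staging_mem,
               const size_t *fifo,      /* queued element offsets */
               size_t seg_len)          /* block length counter value */
{
    size_t fifo_idx = 0;
    while (total_len > 0) {
        unsigned char *dst = staging_mem + fifo[fifo_idx++];
        size_t n = (total_len < seg_len) ? total_len : seg_len;
        for (size_t i = 0; i < n; i++)
            dst[i] = *stream++;          /* byte into staging element */
        total_len -= n;
        if (!crc_ok(dst, n))
            return -1;                   /* error: interrupt processor */
    }
    return 0;
}
```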
  • DMA control logic 106 interrupts system processor 107 to tell the processor that the transfer of data associated with a transaction identification number has been completed.
  • System processor 107 verifies that the header fields of the header/data segments stored in staging memory 110 indicate that the correct mass storage data has been transferred.
  • System processor 107 then writes new header fields on the stored header/data segments to meet network addressing format requirements, and prepares to transfer the header/data segments to one of device interfaces 108 for transmission in packet form on bus 109.
  • FIG. 7 illustrates the transfer of data from staging memory 110 to a network bus device interface 108.
  • To carry out the transfer, system processor 107 selects one of the two device interfaces 108, programs the corresponding sequence storage circuit 702a or 702b (labeled TMT ADDR FIFO) with a series of staging element identifiers, and enters the staging element addresses and lengths into a corresponding table 705a or 705b.
  • These identifiers correspond to individual staging elements of staging memory 110 that contain data to be transmitted to device interface 108. This sequence is obtained from an entry in table 400 generated during the transfer of data from mass storage device interfaces 104 to staging memory 110.
  • Each identifier preferably comprises the starting memory address of the rewritten header field stored in the corresponding staging element and a tag number TA.
  • After programming TMT ADDR FIFO 702a, system processor 107 directs the port control hardware 706 of DMA control logic 103 to access the first staging element identifier from TMT ADDR FIFO 702a and to transfer the packet stored in the corresponding staging element to device interface 108. System processor 107 is then free for other processes. DMA control logic 103 repeats the process for each identifier in TMT ADDR FIFO 702a. After each packet is transmitted to device interface 108, DMA control logic 103 loads a corresponding status identifier into FIFO circuit 704a (labeled TMT STATUS FIFO).
  • The status identifier may be expanded to include, in addition to the status bits previously discussed in connection with RCV STATUS FIFO 206, counts of any unsuccessful attempts to transmit.
  • An interrupt to system processor 107 is then generated to indicate that a packet has been transferred.
  • System processor 107 checks the status of the transfer of the first packet to device interface 108, and then looks for additional status identifiers. If the status indicates a successful transfer, the entry in table 705a corresponding to the tag number read from TMT STATUS FIFO 704a is set to a null value.
  • After checking any additional status identifiers in TMT STATUS FIFO 704a, system processor 107 returns to other operations until interrupted again. It may be desired that data be transferred between staging memory 110 and mass storage device interfaces 104 in header/data segments having a different length than that of the header and data fields of the packets received from bus 109. It may also be that the lengths of the header and data fields of the packets and/or the lengths of the header/data segments transferred between staging memory 110 and mass storage device interfaces 104 vary from one to another. In either case, the differences in length can be accommodated by defining the length of staging elements in staging memory 110 to be a variable parameter. In so doing, the variable length of individual staging elements must be taken into account when transferring data to and from staging memory 110.
  • FIG. 8 illustrates an alternative embodiment of the snaking/desnaking system of FIG. 5, in which staging element identifiers include a staging element length parameter that is loaded into FIFO 804 along with a corresponding staging memory element address.
  • An additional counter circuit 802 (labeled SE LENGTH CNTR) is provided, into which the staging element length value from FIFO 804 is loaded after the corresponding staging element address is loaded by the port control hardware 806 into the address counter 509.
  • The value of counter 802 is decremented once for each byte of the header/data segment 517 transferred to or from staging memory 110, and is used instead of the value of block length counter 508 to determine when port control hardware 506 is to fetch the next staging element address and length from FIFO 804.
  • Block length counter 508 still determines when port control hardware 506 is to insert data validation information ("CRC") into the data stream on DMA channel 105, and padding is carried out in the same manner as previously described.
  • Use of a staging element length parameter as illustrated in FIG. 8 thus permits the length of the header/data fields of the data transferred between staging memory 110 and mass storage device interfaces 104 to be independent of the length of packets received by mass storage system 100. (This two-counter arrangement is modeled in the sketch below.)
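With variable-length elements, two counters run side by side: an element counter (SE LENGTH CNTR 802) that triggers fetching the next address/length pair, and the block length counter that places validation fields. The sketch below models this; the names, the pad value and the byte-level transfer are assumptions.

```c
#include <stddef.h>
#include <stdio.h>

typedef struct { size_t addr; size_t len; } se_entry_t; /* FIFO 804 entry */

static void emit(unsigned char b) { putchar(b); } /* DMA bus stand-in */
static void emit_crc(void)        { /* append validation field */ }

/* Behavioral model of the FIG. 8 variant: se_len (SE LENGTH CNTR 802)
 * expires at element boundaries; block_len (block length counter 508)
 * expires at block boundaries, where CRC goes, independently of
 * element boundaries. */
void snake_out_varlen(const unsigned char *staging_mem,
                      const se_entry_t *fifo, /* addr + length pairs */
                      size_t total_len,       /* data transfer length */
                      size_t block_size)      /* fixed block length */
{
    size_t fifo_idx = 0, se_len = 0, block_len = block_size;
    const unsigned char *src = NULL;

    while (total_len > 0) {
        if (se_len == 0) {               /* element exhausted: fetch */
            src    = staging_mem + fifo[fifo_idx].addr;
            se_len = fifo[fifo_idx].len; /* next address and length */
            fifo_idx++;
        }
        emit(*src++);
        se_len--; block_len--; total_len--;
        if (block_len == 0) {            /* block boundary: CRC */
            emit_crc();
            block_len = block_size;
        }
    }
    if (block_len != block_size) {       /* fractional final block */
        while (block_len > 0) { emit(0x00); block_len--; }  /* pad */
        emit_crc();                      /* final validation field */
    }
}
```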


Abstract

A method and apparatus for transferring data from one device interface to another device interface via elements of a staging memory and a direct memory access (DMA) channel.

Description

A METHOD AND APPARATUS FOR TRANSFERRING DATA THROUGH A STAGING MEMORY
Background Of The Invention

The present invention relates to packet-oriented transfers of data and other information in a computer network. More particularly, the present invention relates to a method and apparatus for staging data in elements of a staging memory and for transferring data between a device interface and the elements of a staging memory via a direct memory access (DMA) channel.
Packet switching is a known system for transmitting information such as data, commands and responses over a shared bus of a computer system or network by placing the information in packets having a specified format and transmitting each packet as a composite whole. Long transmissions, such as transfers of large amounts of data, are broken up into separate packets to reduce the amount of time that the shared bus is continuously occupied by a single transmission. Each packet typically includes a header of control elements, such as address bits and packet identification bits arranged in predetermined fields, and may further include error control information.
One known packet-switching method, described in Strecker et al. United States Patent 4,777,595, requires that all packet transmissions occur between a named buffer in a transmitting node and a named buffer in a receiving node. The named buffers are in actual memory at each node. To write data from one node to another, the data is placed in packets each labeled in designated fields with the name of the destination buffer in the receiving node and an offset value. The offset value of the packet specifies the location in the receiving buffer, relative to the starting address of the buffer, where the first byte of data in that particular packet is to be stored. A transaction identifier unique to the group of packets also is transmitted in a separate field of each packet. The transaction identifier is used in the process of confirming transmission of the packets.
This packet-switching method has considerable drawbacks in that it requires a node to have a named destination buffer in actual memory for receiving packet transmissions, and further requires that the receiving node identify its named destination buffer to the transmitting node prior to a data transfer. It also has the drawback of requiring that the receiving node be responsive to the contents of the destination buffer name field of a transmitted data packet for directing the contents of the packet to the named buffer. These drawbacks are particularly evident if one attempts to impose them on a receiving node which comprises a resource shared by multiple computers in a network. For example, consider a mass storage system acting as a shared resource for several computers in a computer network. The mass storage system must often process data transfer requests from more than one computer concurrently, and the data involved in each of these transfers is often sufficiently large to require that it be divided among several packets for transmission over the network communication bus. Depending on the protocol of the communication bus and the relative priorities of the transfers, the mass storage system may receive packets associated with one data transfer between packets associated with another transfer.
Typically, the mass storage system has a memory through which data passes in transit between a network communication bus device interface and a mass storage device interface. This memory may also handle packets having control messages directed between a system processor of the mass storage system and other processors (e.g., remote processors on the network bus or other processors in the mass storage system). The packets containing data or control messages are transferred between the memory and the device interface by one or more DMA channels. Such a DMA channel comprises a high-speed communications interface, including a data bus and control circuitry, for transferring data directly into or out of memory without requiring the attention of a system processor after initial set-up.
If the mass storage system, prior to receiving a data transmission from any one of the computers in the network, were required to allocate a named buffer space in memory to accept the entire data transfer (which may be many packets long), the concurrent processing of several such data transfer requests would require that the mass storage system concurrently allocate a number of separate named buffer spaces equal to the number of concurrent transfers being processed. This pre-allocation of separate named buffers in the memory of the mass storage system ties up the memory, resulting in inefficient use of available memory and possibly limiting the data throughput of the mass storage system by restricting the number of data requests that can be processed concurrently. Greater efficiency (in terms of memory use) can be achieved by a more dynamic allocation of memory on a packet-by-packet basis, such that memory space for a particular incoming expected packet is not allocated until the packet is received by the mass storage system. Moreover, efficiency is improved by allowing packets to be stored at any available location in the memory. Such arbitrary, packet-by-packet allocation of memory is particularly suited to the memory of a mass storage system. Unlike transfers of data between actual memory of one computer and actual memory of another computer, transfers of data involving a mass storage system do not use the memory of the mass storage system as a final destination for the data. Rather, as described above, packets containing data are only passed through the memory in transit between the communication bus of the network and the mass storage device or devices of the system. Data comes and goes through the memory in two directions (i.e., into and out of mass storage) arbitrarily, depending on the demands of the computers in the network and the conditions (e.g., busy or idle) of the communication bus, the mass storage devices and the data channels leading to the mass storage devices. As a consequence, the amount and specific locations of memory space used at any particular time, and conversely the amount and specific locations available to receive packets, continually vary. Particular memory locations arbitrarily cycle between available and unavailable states. In such circumstances, pre-allocation of named buffer spaces in memory is clearly and unnecessarily inefficient.
In view of the foregoing, it would be desirable instead to permit packets to be placed arbitrarily in available memory locations without regard to their source, contents or relationship to other packets — thus allowing the mass storage system to allocate memory locations based on immediate need and immediate availability (i.e., the memory is free to place an incoming packet in whatever memory location happens to be available when the packet is received by the system). Likewise it would be desirable to permit data from the mass storage devices to be transferred to arbitrary locations in the memory in preparation for transmission over the network communication bus — again allowing the mass storage system to allocate memory locations based on immediate need and immediate availability. Of course, it would further be desirable to be able to retrieve data from arbitrary places in memory and to assemble the data in logical order either for transfer to mass storage or for transmission over the network communication bus.
Packet-switching networks are known in the art that do not require a receiving node to identify a named destination buffer prior to transferring a packet from memory to memory. These networks use various methods for directing the contents of packets into the receiving memory such as, for example, by maintaining a software-controlled address table in the memory of the receiving node, the entries of which are used to point to allocated memory locations unknown to the transmitting node. The present invention adopts the principle of such networks in that it is an object of the present invention to provide a method and apparatus for transferring packets between a network communication bus and memory, without allocating or identifying named buffers.
However, known computer systems typically transfer data into and out of contiguous locations in memory to minimize processor interrupts and simplify the transfer process. In known computer systems in which data is stored in disjoint memory locations, a separate processor interrupt is usually required to transfer each non-contiguous segment of data into and out of memory. The present invention is an improvement over such systems in that, with respect to the writing of data from memory to a device interface, non-contiguous segments of data stored in the memory are joined by DMA control logic to form a contiguous DMA data transfer to the device interface, and in that, with respect to the reading of data into memory from the device interface, a contiguous DMA data transfer from the device interface is routed by DMA control logic into selected, not necessarily contiguous, segments of the staging memory. After initial set-up, processor attention is not required in either case to transfer the individual data segments until the entire transfer is completed.
Summary Of The Invention These and other objects and advantages are accomplished by providing a staging memory logically divided into a plurality of addressable elements. Identifiers corresponding to available memory elements are arbitrarily selected by a microprocessor from a pool of such identifiers and are stored in a sequence storage circuit such as a FIFO storage circuit.
The present invention is described in the context of a mass storage system that includes a staging memory for transferring data between a network bus device interface and a mass storage device interface. When a packet is to be received by the staging memory from a device interface connected to the network communication bus, an element identifier is accessed from the sequence storage circuit by DMA control logic, and the packet is stored in the corresponding location in the memory. The logic indicates that the memory element has received a packet, such as by placing a status word corresponding to the element in a storage register and by generating a control signal such as a processor interrupt signal. The packet is then checked by a system processor of the mass storage system to determine if it contains data for mass storage. If the packet does not have data for storage, the system processor notifies other software that a non-data packet has been received. Otherwise, the system processor places information identifying the received packet in a look-up data table. Multiple packets of data can be received into the memory at high speed because the sequence storage circuit can be programmed prior to transfer with multiple element identifiers.
Data stored in the memory elements is transferred to mass storage by a snaking operation which requires only a single intervention by the system microprocessor. By "snaking" the inventors mean combining data from non-contiguous memory locations into a single data transmission. This is accomplished by programming a sequence storage circuit with a series of element identifiers corresponding to memory elements having data to be included in the single data transmission. Under the control of logic separate from and set up by the system processor, the data from the corresponding series of elements is read from the memory and assembled into a data stream of predetermined length for DMA transfer to a mass storage device interface in accordance with the programmed order of the element identifiers in the sequence storage circuit. The data stream comprises header fields, data fields and error correction fields. Any of these fields may exist in the memory, or may be generated by the DMA control logic as a result of instructions provided to the logic by the system microprocessor during set-up. In a preferred embodiment, for example, the control logic pads the last data field in the data stream if necessary to achieve proper block size as defined for transmissions between the memory and the device interface. In addition, any of these fields (e.g., the header fields) may be omitted, or other fields added, depending upon the nature of the data being transferred.
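For purposes of illustration only (the disclosed snaking is performed by hardware DMA control logic, not software), the operation can be modeled in C as a simple gather: element descriptors are consumed in the order they were programmed into the sequence storage circuit, and the corresponding memory segments are concatenated into one stream. The structure and function names below are assumptions, not part of the disclosure:

    /* Minimal software model of "snaking": segments are taken in the
       programmed order and concatenated into one contiguous stream. */
    #include <stddef.h>
    #include <string.h>

    struct segment {
        const unsigned char *addr;  /* start of a staging element's data */
        size_t len;                 /* bytes to take from this element   */
    };

    /* Gather 'count' segments into 'stream'; returns total bytes written. */
    size_t snake(const struct segment *seq, size_t count, unsigned char *stream)
    {
        size_t total = 0;
        for (size_t i = 0; i < count; i++) {
            memcpy(stream + total, seq[i].addr, seq[i].len);
            total += seq[i].len;
        }
        return total;
    }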
When data is to be read from mass storage, the data is transferred to the staging memory as a single contiguous DMA data stream from a mass storage device interface. The data stream is divided into segments which are stored in selected, not necessarily contiguous, memory elements of the staging memory in accordance with a series of element identifiers programmed into a sequence storage circuit by the system processor. This process is referred to herein as "desnaking." The element identifiers correspond to available memory elements and are arbitrarily selected by the system microprocessor from a pool of such identifiers. The data is stored under the control of logic separate from and set up by the system processor, such that system processor intervention is not required after initial set-up until the transfer is completed. The system processor keeps track of which memory elements have been programmed to receive which data segments, and when ready to do so sets up logic to retrieve data segments from the staging memory, assemble them into individual packets and provide them to the network bus device interface for transmission over the network communication bus.
Brief Description Of The Drawings
The above and other objects and advantages of the present invention will be apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings, in which like reference characters refer to like parts throughout, and in which:
FIG. 1 is a block diagram of a mass storage system including a staging memory in accordance with the principles of the present invention;
FIG. 2 is a block diagram of an embodiment of the present invention, including a staging memory and receive address and receive status FIFO's;
FIG. 3 is a diagram showing the format of a typical data packet of the type known in the prior art and suitable for use with the present invention;
FIG. 4 is a diagram of a data table provided in processor memory to identify memory elements of the staging memory of FIG. 1 that have received data packets from the network communication bus or header/data segments from a mass storage device interface;
FIG. 5 is a block diagram of an embodiment of the snaking/desnaking system of the present invention, including the staging memory of FIG. 1 and a snaking/desnaking FIFO;
FIG. 6 is a flow diagram of the states of state machine sequence circuit 506 of FIG. 5 during execution of transfers of data between staging memory 110 and DMA channel 105 in accordance with the principles of the present invention;
FIG. 7 is a block diagram of an embodiment of the packet transmission system of the present invention, including a staging memory and transmit address and transmit status FIFO's for each network bus device interface; and
FIG. 8 is a block diagram of an alternative embodiment of the snaking system of the present invention.
Detailed Description Of The Invention
FIG. 1 shows a mass storage system 100 that includes one or more mass storage devices 102 (e.g., disk drives or disk drive arrays) and corresponding device interfaces 104 for communicating between devices 102 and other circuitry in mass storage system 100. Mass storage system 100 is connected to a network communication bus 109 via device interfaces 108. There is provided in mass storage system 100 a staging memory 110 for temporarily storing information during a data transfer between the mass storage devices 102 and a host computer attached to network communication bus 109. This staging memory is used, for example, to hold data received from one of either device interfaces 104 or 108 pending readiness of another device interface to receive the data. In the case of a data transfer from a host computer on network bus 109 to a mass storage device 102, the staging memory 110 receives the data from one of device interfaces 108 and holds the data until it is transferred to one of device interfaces 104. In the case of a data transfer from a mass storage device 102 to a host computer attached to network bus 109, the staging memory 110 receives the data from one of device interfaces 104 and holds the data until it is transferred to one of device interfaces 108. Similarly, staging memory 110 may also hold data that is transferred between device interfaces of like kind (e.g., a transfer from one of device interfaces 104 to another). This same memory may be used for purposes of handling transfers of information other than mass storage data, such as command messages between a host computer connected to network bus 109 and mass storage system 100.
Data transfers in mass storage system 100 are controlled by system processor 107 through DMA control logic components 103 and 106. DMA control logic component 103 controls the transfer of data between device interfaces 108 and staging memory 110. DMA control logic components 106 control the transfer of data between device interfaces 104 and staging memory 110. In the embodiment of FIG. 1, two device interfaces 108 are shown connected to staging memory 110 through a 2:1 multiplexer 111, which in response to a control signal from DMA logic component 103 determines which of the two device interfaces 108 may communicate with staging memory 110. Each of device interfaces 108 includes a port 108a for interfacing with DMA bus 112. Likewise, staging memory 110 includes a port 110a for interfacing with DMA bus 112. DMA control logic component 103 provides control signals to each of ports 108a and 110a and multiplexer 111 to accomplish transfers of data on DMA bus 112. Each of device interfaces 104 has a port 104a for transmitting or receiving DMA data on either of two DMA buses 105 as determined by the setting of corresponding 2:1 multiplexers 104b. The set-up of each of multiplexers 104b is controlled by two DMA control logic components 106. Likewise, staging memory 110 includes two ports 110b, each for communicating with a respective one of DMA buses 105. By providing two DMA buses 105 between device interfaces 104 and staging memory 110, each with a separate DMA control logic component 106, there can be two simultaneous DMA transfers between staging memory 110 and two different ones of device interfaces 104. Each of DMA control logic components 106 provides control signals to ports 104a and 110b and multiplexers 104b to accomplish data transfers on DMA bus 105. In addition to controlling DMA logic components 103 and 106, system processor 107 has direct access to staging memory 110 via port 110c. System processor 107 also has direct access to device interfaces 104 and 108. As described in greater detail below, DMA control logic component 103 serves the purpose of off-loading data transfer overhead from system processor 107 in connection with a data transfer between staging memory 110 and one of device interfaces 108 after an initial set-up of DMA control logic component 103 and device interface 108. Similarly, also as described in greater detail below, DMA control logic components 106 serve the purpose of off-loading data transfer overhead from system processor 107 in connection with a data transfer between staging memory 110 and one of device interfaces 104 after an initial set-up of the appropriate DMA logic components 106.
FIG. 2 shows a block diagram of an exemplary embodiment of the packet receiving system of the present invention implemented to receive data packets from device interfaces 108 into staging memory 110. Device interfaces 108 receive information over bus 109 in the form of packets, such as the data packet 300 shown in FIG. 3. The format of these packets typically is defined such that each packet has a known size, usually indicated in the header field defined below, and includes three fields: a packet header or identification field 300a, a data field 300b, and a field 300c for validation information (e.g., CRC information). The actual format of the packets may vary depending on the information processing system in which the packet receiving system of the present invention is used. As will be described in greater detail below, the present invention is capable of accommodating variations in packet size. It is also to be appreciated that the format of data packet 300 may be used to transfer control or status information between a computer and the mass storage system, such that the data field of a packet received by device interface 108 may contain information other than mass storage data, such as control or status information. In packet 300 of FIG. 3, the type of data contained by field 300b (e.g., mass storage data, control or status information) is identified by the OPCODE portion of identification field 300a.
Various schemes are used in conventional information processing systems for referencing individual packets containing mass storage data. One such conventional scheme involves transaction-based packet transfers. Each transaction has a number by which it and each packet included in the transaction are referred to. Where a plurality of packets is included in a particular transaction, the order of the mass storage data in the packets is identified by an offset value equal to the number of words, bytes or other data units by which the beginning of the mass storage data in each packet is offset from the beginning of the mass storage data in the first packet in the transaction. A transaction identification field 302 and an offset value field 304 are shown in data packet 300 as part of identification field 300a. The exemplary embodiment of the packet receiving system of the present invention described herein is discussed in the context of a network using this type of packet reference. As will be apparent to one of skill in the art, however, embodiments of the present invention can be practiced with other packet identification schemes. Moreover, as will also be apparent, the present invention can be practiced without regard to any particular destination buffer address that may be specified in the packet.
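For concreteness, the identification field described above can be pictured as a C structure. The field widths and ordering below are illustrative assumptions only; the disclosure does not fix them:

    #include <stdint.h>

    /* Illustrative layout of identification field 300a of packet 300.
       Actual field widths and ordering are system-dependent. */
    struct packet_id_field {
        uint16_t opcode;          /* distinguishes mass storage data from
                                     control or status information        */
        uint32_t transaction_id;  /* field 302: names the transaction     */
        uint32_t offset;          /* field 304: position of this packet's
                                     data within the transaction's data   */
    };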
Generally, a transfer of packeted data over a conventional shared system or network bus, such as may be involved in writing data from the memory of a central processor to a mass storage system, is initiated by a command packet from a remote central processor to the mass storage system. For example, in a write transaction, the command packet typically requests the mass storage system to respond by providing a return packet including, among other information, a receiving address, a source address (provided to the mass storage system by the requesting computer) and a transaction identifier. Upon receipt of this response, the remote central processor places the data it seeks to transfer into packets, and places the receiving address and transaction identifier generated by the mass storage system into the corresponding fields of each data packet. The central processor also generates an offset value for each data packet, and typically transmits the data packets in the order of their offset value to the mass storage system.
Because of the multiplexing capability of a packet-switching system, these data packets may be received by the mass storage system interspersed among data packets associated with other transactions. In a typical conventional mass storage system, the data from the received data packets would be placed in contiguous memory locations beginning at the receiving address identified in the address field of the packets plus an offset designated in each packet.
The staging memory 110 of the present invention is useful in a mass storage system to allow received data packets to be stored in memory at non-contiguous locations unknown even symbolically to the remote central processor. Staging memory 110 comprises an addressable memory circuit. The memory elements of staging memory 110 may be implemented using commercially available integrated circuit RAM devices (e.g., devices such as Motorola's 6293 RAM chip). Commercially available register devices also may be used to provide ports 110a, 110b, and 110c. Each port preferably comprises a data latch register, an address counter and a read/write enable register, and may include other logic as may be desired to implement the port functions. Since the purpose of staging memory 110 is to stage network packets, the memory is logically divided by system processor 107 into a plurality of "staging elements" 200, each of which can be described by an address and a length. In this embodiment, all staging elements are of equal length, that length being the maximum expected packet length. This logical division is accomplished by system processor 107 before mass storage system 100 enters an on-line state. System processor 107 divides the size of the staging memory 110 by the maximum expected packet length to determine the number of staging elements 200, and creates a list SE_FREE_POOL in its memory of the starting addresses of each staging element 200. When a remote central processor initiates a write operation to mass storage system 100, system processor 107 generates and returns to the central processor, as previously described, a packet including a transaction identifier. System processor 107 also places the generated transaction identifier into a memory-resident table for subsequent use, as described hereafter, in completing outstanding transactions after data is placed in staging memory 110. An example of such a table, described in greater detail below, is shown in FIG. 4.
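The logical division and the SE_FREE_POOL list might be modeled in C as follows; the memory size, maximum packet length and list-handling details are assumptions, as the disclosure leaves them to the implementation:

    #include <stdint.h>

    #define STAGING_MEM_SIZE  (512u * 1024u)  /* assumed memory size        */
    #define MAX_PKT_LEN       520u            /* assumed max packet length  */
    #define NUM_ELEMENTS      (STAGING_MEM_SIZE / MAX_PKT_LEN)

    static uint32_t se_free_pool[NUM_ELEMENTS]; /* SE_FREE_POOL list        */
    static unsigned se_free_count;

    /* Performed once before the system enters an on-line state: record
       the starting address of every staging element. */
    void init_free_pool(uint32_t mem_base)
    {
        for (unsigned i = 0; i < NUM_ELEMENTS; i++)
            se_free_pool[i] = mem_base + i * MAX_PKT_LEN;
        se_free_count = NUM_ELEMENTS;
    }

    /* An element leaves the pool when its identifier is loaded into a
       sequence storage circuit and returns when its contents have been
       transferred or otherwise processed (empty/full checks omitted). */
    uint32_t se_alloc(void)            { return se_free_pool[--se_free_count]; }
    void     se_release(uint32_t addr) { se_free_pool[se_free_count++] = addr; }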
Prior to a packet transfer transaction, system processor 107 programs a sequence storage circuit 202 in DMA control logic 103 with a series of staging element identifiers. These identifiers correspond to individual staging elements of staging memory 110 which are available to receive packets. They are selected by system processor 107 from the available or currently unused staging elements identified on the SE_FREE_POOL list, and are individually accessed by port control hardware 203 to store data packets received by device interfaces 108 into the corresponding staging elements of staging memory 110. Port control hardware 203 comprises logic, which may be conventionally implemented, such as by using discrete logic or programmable array logic, to manipulate the control, address, and data registers of ports 108a and 110a, and to control multiplexer 111, as required by the particular implementation of these circuits for transferring data between device interfaces 108 and staging memory 110.
In the embodiment of FIG. 2, the sequence storage circuit 202 is implemented using a conventional FIFO (first in, first out) storage circuit (labeled RCV ADDR FIFO) in which staging element identifiers stored in the circuit are accessed in the same sequence that they are loaded by system processor 107. The sequence in which the programmed identifiers are accessed by port control hardware 203 can be in a different order if desired (e.g., the identifiers can be accessed in reverse order, such as by using a LIFO (last in, first out) circuit). In addition, the sequence storage circuit can be implemented by circuitry other than a FIFO or LIFO circuit, such as by using RAM or register arrays, or a microprocessor.
In a preferred embodiment of the present invention, each staging element identifier includes the starting address SE in staging memory 110 of the corresponding staging element. As each address SE is loaded by system processor 107 into RCV ADDR FIFO 202, a short "tag number" RT is joined to the address, and this tag number and the corresponding starting address and length of each staging element loaded into circuit 202 are placed by system processor 107 into a reference table 204. The purpose of the tag number is to provide a shorthand reference to each starting address SE loaded into RCV ADDR FIFO 202 for use in generating status words in RCV STATUS FIFO 206. By using the tag number instead of the actual starting address of the staging element in RCV STATUS FIFO 206, the necessary bit-width of FIFO 206 is kept small. The generation of the status words in RCV STATUS FIFO 206 is described below.
Tag numbers are loaded into RCV ADDR FIFO 202 in consecutive order, although another order may be used, as long as the order of tag numbers in RCV ADDR FIFO 202 is reflected by reference table 204. Preferably the tag numbers have values from 0 to (m-1), where m is a parameter variable equal to the depth, or a portion thereof, of RCV ADDR FIFO 202 (i.e., the number of staging element identifiers that can be loaded into RCV ADDR FIFO 202 at one time). For example, if a FIFO circuit having a depth of 16 or more staging element identifiers is used, tag number RT may comprise a 4-bit binary number having a value of 0-15. The first staging element address loaded into RCV ADDR FIFO 202 might be assigned a tag number of 0, in which case the second will be assigned 1, etc. In this case, the tag number acts as a modulo-16 counter, such that the next tag number used after 15 would be 0.
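A minimal sketch of this tag assignment, assuming a FIFO depth of 16 and modeling reference table 204 as an array indexed by tag; the FIFO write routine is a hypothetical stand-in for the hardware:

    #include <stdint.h>

    #define RCV_FIFO_DEPTH 16u   /* assumed depth of RCV ADDR FIFO 202 */

    static uint32_t ref_table_204[RCV_FIFO_DEPTH]; /* tag -> SE address */
    static unsigned next_tag;                      /* modulo-16 counter */

    extern void rcv_addr_fifo_push(uint32_t se_addr, unsigned tag);

    /* Pair a staging element address with the next tag, mirror the
       pairing in reference table 204, and load both into FIFO 202. */
    void load_rcv_addr_fifo(uint32_t se_addr)
    {
        unsigned tag = next_tag;
        next_tag = (next_tag + 1u) % RCV_FIFO_DEPTH;
        ref_table_204[tag] = se_addr;
        rcv_addr_fifo_push(se_addr, tag);
    }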
System processor 107 reloads RCV ADDR FIFO 202 with starting addresses of currently available staging elements from the SE_FREE_POOL list as the initially loaded addresses are used by port control hardware 203 to receive data packets arriving at device interfaces 108 from bus 109. System processor 107 updates reference table 204 as the system processor reloads RCV ADDR FIFO 202. Preferably, the initial loading of staging element identifiers in RCV ADDR FIFO 202 is done when the mass storage system is initialized. Of course, when a staging element receives a packet, it becomes unavailable until such time as that packet is transferred from the staging element to a mass storage device interface or is otherwise processed, at which time the staging element returns to an available state and is returned to the SE_FREE_POOL list. Thus, during the course of operation of staging memory 110, individual staging elements will cycle between available and unavailable states at various times. System processor 107 keeps track of this cycling process using the SE_FREE_POOL list in order to know which staging elements are available at any given time to load into RCV ADDR FIFO 202. In the preferred embodiment, device interface 108 checks and strips the CRC information (e.g., validation field 300c) from packets that it receives from bus 109, such that a data segment comprising the header and data fields from each packet received is stored in staging memory 110. After the data segment from each data packet is received by staging memory 110, port control hardware 203 loads a corresponding status identifier into FIFO circuit 206 (RCV STATUS FIFO) to indicate completion of the packet transfer.
The status identifier includes a group of STAT bits and the tag number that was assigned in RCV ADDR FIFO 202 to the staging element which received the packet. STAT bits may include, for example, an error bit that indicates whether or not a transmission error was detected by the DMA control logic 103 and a bit indicating which of device interfaces 108 received the packet from bus 109. As with the RCV ADDR FIFO 202, RCV STATUS FIFO 206 can be implemented using conventional circuitry other than a FIFO circuit.
Upon transition of RCV STATUS FIFO 206 from an empty to a non-empty state, an interrupt is generated to system processor 107 to indicate that a packet has been received. In response to the interrupt, system processor 107 reads the tag number of the first status identifier in RCV STATUS FIFO 206 and determines the starting address and length of the corresponding staging element from table 204 (it may not be necessary to list the lengths of the staging elements in table 204 if they are all equal, in which case the length may be stored in a single memory location or register which is read by system processor 107). System processor 107 then places the starting address, length and offset of the packet into table 400 as shown in FIG. 4. The staging element identifier entry in table 204 corresponding to the tag number read from the RCV STATUS FIFO 206 is set to a null value to indicate that there is no longer a valid tag by that number in the DMA control logic 103. Table 400 is indexed according to the transaction identifiers of outstanding transactions, such that for a given transaction identifier, the starting addresses of staging elements having received data packets associated with that transaction are listed by system processor 107 in the order in which the packets of that transaction were received by a device interface 108 or in the order of their offset. Table 400 is used by system processor 107 to complete the transfer of data from staging memory 110 to mass storage device interfaces 104, as described in connection with FIG. 5. It may be desired that new control information such as logical block address and mass storage device number, for internal use by the mass storage system in completing the transfer to mass storage, be stored in a staging element with the data packet. This can be accomplished simply by having system processor 107 write the new control information over selected portions of the original control elements contained in the header field of the packet after the packet has been placed in staging memory 110.
Alternatively, such new control information can be added to the packet data field by DMA control logic 106 as the data fields are transferred from staging memory 110 to mass storage device interface 104. After system processor 107 accesses the first status identifier in RCV STATUS FIFO 206 in response to an interrupt and places the address of the associated staging element into table 400, system processor 107 checks RCV STATUS FIFO 206 for additional status identifiers, and repeats the accessing process for each such identifier. If there are no more identifiers in RCV STATUS FIFO 206, system processor 107 returns to other operations until interrupted again.
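The interrupt service path just described might be summarized as follows; all helper routines are hypothetical stand-ins for the hardware and software components named in the text, including the non-data branch discussed next:

    #include <stdint.h>

    extern uint32_t ref_table_204[];                  /* tag -> SE address */
    extern int  rcv_status_fifo_pop(unsigned *tag, unsigned *stat_bits);
    extern int  packet_contains_data(uint32_t se_addr);   /* inspect OPCODE */
    extern void table_400_record(uint32_t se_addr);   /* file the address
                                                         under its
                                                         transaction id    */
    extern void notify_message_software(uint32_t se_addr);

    void rcv_status_interrupt(void)
    {
        unsigned tag, stat;
        /* Drain RCV STATUS FIFO 206; one interrupt may cover several
           received packets. */
        while (rcv_status_fifo_pop(&tag, &stat)) {
            uint32_t se_addr = ref_table_204[tag];    /* table 204 lookup */
            ref_table_204[tag] = 0;                   /* null the used tag */
            if (packet_contains_data(se_addr))
                table_400_record(se_addr);            /* mass storage data */
            else
                notify_message_software(se_addr);     /* command/message   */
        }
    }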
Where a packet arriving into device interface 108 is other than a mass storage data packet, such as a command packet or other type of message, the packet is identified by system processor 107 as being something other than mass storage data. The packet is received into staging memory 110 in the same manner as a mass storage data packet except that system processor 107 does not place the corresponding staging element address into table 400. Instead, system processor 107 provides the staging element address containing the packet to other software in the control circuitry of the mass storage system, which in turn processes the packet and ultimately returns the staging element which contained the packet to the SE_FREE_POOL list.
When the system processor detects that all mass storage data packets for a particular write transaction have been received from bus 109, it prepares to transfer the mass storage data to one of mass storage device interfaces 104.
FIG. 5 illustrates an exemplary embodiment of a "snaking/desnaking" mechanism for transferring data between staging memory 110 and a DMA channel 105 connected to mass storage device interfaces 104. The present invention concerns data transfers in both directions over DMA channel 105. The term "snaking" has been previously described herein. First will be described a method for snaking together data stored in selected staging elements of staging memory 110 to transmit the data as a single contiguous DMA data transfer to one of mass storage device interfaces 104. For purposes of explanation, it is assumed that several packets of mass storage data associated with a single data transfer transaction have been transmitted by a computer to the mass storage system, and have been stored in various staging elements 200 of staging memory 110 in accordance with the packet receiving aspect of the present invention described in connection with FIG. 2. The stored mass storage data packets are of equal length, with the possible exception of the last data segment associated with the transaction, which may have only a fractional amount of mass storage data. Each stored mass storage data packet is modified by system processor 107 to include a header comprising control and addressing information for use in directing the corresponding mass storage data to a particular logical or physical location in mass storage (as previously stated, this information may be written over the control information originally included in the header field of the packet). It is also assumed that system processor 107 has knowledge of the starting memory addresses, lengths and offset values of the data segments to be snaked together. This can be accomplished, for example, by creating a look-up data table like that shown in FIG. 4 when the data is stored in staging memory 110, in the manner previously described. To transfer the data segments from selected staging elements 200 of staging memory 110 to DMA channel 105, system processor 107 programs sequence storage circuit 504 of DMA control logic 106 with a series of memory addresses ("SE ADDRESS") corresponding to the starting addresses in memory of the modified header fields contained in each of the selected staging elements. Sequence storage circuit 504 is preferably implemented using a FIFO storage circuit (labeled "SNAKE/DESNAKE FIFO") in which staging memory addresses are programmed and accessed in accordance with the offset values of the data segments contained in the corresponding staging elements. The sequence in which the staging memory addresses are programmed (and thus the sequence in which the contents of corresponding staging elements are transferred) can be varied as desired. Sequence storage circuit 504 may be implemented by circuitry other than a FIFO circuit, such as by using a LIFO circuit, RAM or register arrays, or a microprocessor. After programming sequence storage circuit
504, system processor 107 loads data transfer length counter 510 with a value equal to the total length of data to be transferred. This loading of data transfer length counter 510 initiates the operation of port control hardware 506. Port control hardware 506 comprises a state machine sequence circuit and other logic, which may be conventionally implemented, such as by using discrete logic or programmable array logic, to manipulate the control, address and data registers of ports 104a and 110b, and to control multiplexer 104b, and may be constructed in any conventional manner to perform the DMA transfer without requiring further attention from system processor 107. A flow diagram 600 illustrating the states of the state sequence circuit of port control hardware 506 as it executes DMA transfers between staging memory 110 and DMA channel 105 is shown in FIG. 6.
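The set-up just described amounts to two steps, sketched below with hypothetical helper names; note that it is the loading of data transfer length counter 510 that starts the hardware:

    #include <stdint.h>

    extern void snake_fifo_push(uint32_t se_addr);        /* FIFO 504 write */
    extern void load_transfer_length_counter(uint32_t n); /* counter 510;
                                                             starts the
                                                             hardware       */

    /* Program the SNAKE/DESNAKE FIFO with staging element addresses in
       logical (offset) order, then start the transfer. */
    void start_snake_transfer(const uint32_t *se_addrs, unsigned count,
                              uint32_t total_len)
    {
        for (unsigned i = 0; i < count; i++)
            snake_fifo_push(se_addrs[i]);
        load_transfer_length_counter(total_len);  /* initiates port
                                                     control hardware 506 */
    }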
The states involved in a transfer from staging memory 110 to device interface 104 are shown in the lower portion of FIG. 6, and are generally referred to herein as read sequence 600a. The state machine sequence circuit of port control hardware 506 begins read sequence 600a from an idle state 601 when state machine sequence circuit 506 is initiated by system processor 107 with the loading of data transfer length counter 510. The state machine sequence circuit first loads block length counter 508 with a value equal to the length of each header/data segment (e.g., 520 bytes) in staging memory 110 (excepting fractional data segments) (state 602). The state machine sequence circuit next causes the port control hardware to generate any control signals that may be necessary to set up DMA channel 105, mass storage device interface port 104a and staging memory port 110b for the DMA transfer (state 604).
The state machine sequence circuit of port control hardware 506 then assembles the selected data segments into a single data stream 512 which is transferred over DMA channel 105 to mass storage device interface 104. This may be accomplished as follows. The state machine sequence circuit causes the first staging memory address in sequence storage circuit 504 to be loaded into address counter 509, which provides staging memory port 110b with staging element addresses for directing header/data bytes out of staging memory 110 (state 606). Header 514 and data field 516, comprising a header/data segment 517, are then transferred from the addressed staging element to DMA channel 105. After each byte is transferred to DMA channel
105, block length counter 508 and data transfer length counter 510 are each decremented by one. Although transfers between staging memory 110 and DMA channel 105 are described herein as taking place one byte at a time, such that block length counter 508 and data transfer length counter 510 are decremented on a byte-by-byte basis, the ports 110b and 104a and DMA channel 105 may be implemented to transfer larger amounts of data in parallel (e.g., longwords). In such case, counters 508 and 510 may be implemented to count longwords or other units rather than bytes. When block length counter 508 reaches zero, indicating that a full header/data segment 517 of 520 bytes has been transferred to DMA channel 105, the state machine sequence circuit directs port control hardware 506 to reload block length counter 508 with the header/data segment length value and to cause the next staging memory address to be loaded into address counter 509 from sequence storage circuit 504 to begin the transfer of another header/data segment (state 608). Before transfer of this next header/data segment begins, the state machine sequence circuit of port control hardware 506 causes data validation information ("CRC" 518) to be appended to data field 516 of the first segment in DMA data stream 512 (state 610). This process (states 606, 608, 610) is repeated until data transfer length counter 510 equals one. If block length counter 508 equals one when data transfer length counter 510 reaches one, the last byte of data is transferred and each counter is decremented to zero (state 612). A data validation field is then appended to the just transferred data field (state 614) and the state machine sequence circuit 506 returns to the idle state 601. If block length counter 508 is not equal to one when data transfer length counter 510 equals one (i.e., when the last data byte stored in staging memory 110 is being transferred), block length counter 508 will have a non-zero value after the last stored data byte has been transferred and counters 508 and 510 have been decremented. To complete the last data field of DMA data stream 512, the state machine sequence circuit causes port control hardware 506 to continue transferring bytes of "pad" data on bus 105 as part of the data stream (state 616). This "pad" data comprises a repeating value known as the pad byte. Pad bytes are transferred until the length of the last transmitted header/data segment is equal to the lengths of the previous header/data segments. This is accomplished by decrementing the block length counter after each pad byte is transmitted, and by continuing the padding operation until the block length counter reaches zero. After the last pad byte is transferred and the block length counter is decremented to zero (state 618), a data validation field is transmitted (state 614) to complete the DMA data stream from staging memory 110 to device interface 104.
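Read sequence 600a can be summarized in software form as follows; the byte-wide transfer, the 520-byte segment length and the zero pad value are examples taken from or assumed consistent with the text, and the helper routines are hypothetical stand-ins for ports 110b and 104a and the CRC generator:

    #include <stdint.h>

    #define SEG_LEN  520u   /* header/data segment length (example value) */
    #define PAD_BYTE 0x00u  /* assumed pad byte value                     */

    extern uint32_t snake_fifo_pop(void);           /* next SE address    */
    extern uint8_t  staging_mem_read(uint32_t a);   /* via port 110b      */
    extern void     dma_put(uint8_t b);             /* onto DMA channel   */
    extern void     dma_put_crc(void);              /* append CRC field   */

    void read_sequence(uint32_t transfer_len)       /* counter 510 value  */
    {
        uint32_t block_len = SEG_LEN;               /* counter 508        */
        uint32_t addr = snake_fifo_pop();           /* counter 509        */
        while (transfer_len > 0) {                  /* state 606          */
            dma_put(staging_mem_read(addr++));
            block_len--; transfer_len--;
            if (block_len == 0 && transfer_len > 0) {
                dma_put_crc();                      /* state 610          */
                block_len = SEG_LEN;                /* reload counter 508 */
                addr = snake_fifo_pop();            /* state 608          */
            }
        }
        while (block_len > 0) {                     /* states 616-618:    */
            dma_put(PAD_BYTE);                      /* pad a fractional   */
            block_len--;                            /* final segment      */
        }
        dma_put_crc();                              /* state 614          */
    }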
With respect to a data transfer in which a remote central processor on network bus 109 seeks to retrieve data from mass storage 102 (i.e., a read mass storage data operation), mechanisms similar to those described above can be used to route header/data segments 517 from a single contiguous DMA data stream 512 on DMA channel 105 into available staging elements of staging memory 110, and to then transfer the header/data segments in packet form from staging memory 110 to network communication bus 109 via device interface 108.
The read operation is initiated by a command packet from the remote central processor that provides, among other information, an identification of the mass storage data to be read. The command packet is received by mass storage system 100 via a network bus device interface 108 and is transferred to staging memory 110 in the manner previously described. System processor 107 reads the command packet stored in staging memory 110, and assigns one or more transaction identification numbers to the command. The number of transaction identification numbers used depends on the amount of data requested. System processor 107 then enters the transaction identification numbers into table 400, and notifies the appropriate mass storage device 102 to retrieve the data requested by the command packet.
When the mass storage device 102 is ready to transfer the data associated with a particular transaction identification number, the mass storage device notifies its device interface 104, which in turn causes system processor 107 to be interrupted. System processor 107 determines how many staging elements of staging memory 110 would be required to transfer the mass storage data associated with the transaction identification number and obtains the necessary number of staging elements from the SE_FREE_POOL list. For each staging element, the address in staging memory 110 at which transfer of the header/data segment is to begin is loaded into SNAKE/DESNAKE FIFO 504. The staging element addresses are also entered into table 400 in the order in which they are loaded into FIFO 504. System processor 107 then selects an available
DMA channel 105, and initiates the operation of the state machine sequence circuit within the DMA control logic component 106 associated with the selected channel. Referring now to the states of write (to staging memory 110) sequence 600b, the operation of the state machine sequence circuit is initiated by system processor 107 by loading data transfer length counter 510 with a value equal to the total length of data to be transferred (state 620). The state machine sequence circuit then causes port control hardware 506 to generate any control signals that may be necessary to condition DMA channel 105 and port 110b of staging memory 110 for the DMA transfer (state 622), and loads block length counter 508 with a value equal to the length of each header/data segment 517 to be transferred (state 624).
The state machine sequence circuit of port control hardware 506 next causes the first staging memory address in the sequence storage circuit 504 to be loaded into address counter 509, which provides staging memory port 110b with staging element addresses for directing header/data bytes into staging memory 110 (state 624). A header/data segment 517 is then transferred from mass storage device interface port
104a to the addressed staging element. After each byte is transferred to the staging element, block length counter 508 and data transfer length counter 510 are decremented by one. When a full header/data segment 517 has been transferred to staging memory 110 (state 626), as indicated by block length counter 508 being decremented from one to zero, the state machine sequence circuit of port control hardware 506 checks the data validation field appended to the end of the header/data segment to ensure that the header/data segment was not corrupted during the transfer (state 628). The data validation information is not necessarily stored in staging memory 110, but can be stripped from the header/data segment when checked by the state machine sequence circuit of port control hardware 506. If stripped, new validation information is appended when the header/data segment is later transferred out of staging memory 110. If the state machine sequence circuit of port control hardware 506 detects an error when the data validation information is checked, an interrupt is posted to the system processor 107.
After the data validation information is checked and it is determined that the header/data segment is valid, the state machine sequence circuit directs the port control hardware to reload block length counter 508 with the header/data segment length value and to cause the next staging memory address from the SNAKE/DESNAKE FIFO 504 to be loaded into address counter 509 to begin the transfer of another header/data segment. This process is repeated until the last data byte of the data stream on DMA channel 105 is transferred. When block length counter 508 decrements from one to zero after the last data byte is transferred (state 630), the state machine sequence circuit checks and strips the last data validation field (state 632) and returns to idle state 601.
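Write sequence 600b mirrors the read sequence. A sketch under the same assumptions (byte-wide transfers, 520-byte segments, hypothetical helper routines):

    #include <stdint.h>

    #define SEG_LEN 520u    /* assumed header/data segment length */

    extern uint32_t snake_fifo_pop(void);              /* counter 509 load */
    extern uint8_t  dma_get(void);                     /* from channel 105 */
    extern int      dma_check_and_strip_crc(void);     /* nonzero on error */
    extern void     staging_mem_write(uint32_t a, uint8_t b); /* port 110b */
    extern void     post_error_interrupt(void);        /* to processor 107 */

    void write_sequence(uint32_t transfer_len)         /* counter 510      */
    {
        while (transfer_len > 0) {
            uint32_t addr = snake_fifo_pop();          /* next element     */
            uint32_t block_len = SEG_LEN;              /* counter 508      */
            while (block_len > 0 && transfer_len > 0) {
                staging_mem_write(addr++, dma_get());  /* states 624-626   */
                block_len--; transfer_len--;
            }
            if (dma_check_and_strip_crc() != 0)        /* states 628-632   */
                post_error_interrupt();                /* corrupted data   */
        }
    }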
After the last byte of data is transferred to staging memory 110, DMA control logic 106 interrupts system processor 107 to tell the processor that the transfer of data associated with a transaction identification number has been completed. System processor 107 verifies that the header fields of the header/data segments stored in staging memory 110 indicate that the correct mass storage data has been transferred. System processor 107 then writes new header fields on the stored header/data segments to meet network addressing format requirements, and prepares to transfer the header/data segments to one of device interfaces 108 for transmission in packet form on bus 109.
FIG. 7 illustrates the transfer of data from staging memory 110 to a network bus device interface 108. Prior to transfer, system processor 107 selects one of the two device interfaces 108 and programs the corresponding sequence storage circuit 702a or 702b (labeled TMT ADDR FIFO) with a series of staging element identifiers and enters the staging element addresses and lengths into a corresponding table 705a or 705b. These identifiers correspond to individual staging elements of staging memory 110 that contain data to be transmitted to device interface 108. This sequence is obtained from an entry in table 400 generated during the transfer of data from mass storage device interfaces 104 to staging memory 110. For purposes of illustration, it is assumed hereafter that the device interface for a cable A has been selected by system processor 107. Each identifier preferably comprises the starting memory address of the rewritten header field stored in the corresponding staging element and a tag number TA.
After programming TMT ADDR FIFO 702a, system processor 107 directs the port control hardware 706 of DMA control logic 103 to access the first staging element identifier from TMT ADDR FIFO 702a and to transfer the packet stored in the corresponding staging element to device interface 108. System processor 107 is then free for other processes. DMA control logic 103 repeats the process for each identifier in TMT ADDR FIFO 702a. After each packet is transmitted to device interface 108, DMA control logic 103 loads a corresponding status identifier into FIFO circuit 704a (labeled TMT STATUS FIFO). Here, the status identifier may be expanded to include, in addition to the status bits previously discussed in connection with RCV STATUS FIFO 206, counts of any unsuccessful attempts to transmit. Upon transition of TMT STATUS FIFO 704a from an empty state to a non-empty state, an interrupt to system processor 107 is generated to indicate that a packet has been transferred. System processor 107 checks the status of the transfer of the first packet to device interface 108, and then looks for additional status identifiers. If the status indicates a successful transfer, the entry in table 705a corresponding to the tag number read from the TMT STATUS FIFO 704a is set to a null value. After checking any additional status identifiers in TMT STATUS FIFO 704a, system processor 107 returns to other operations until interrupted again.
It may be desired that data be transferred between staging memory 110 and mass storage device interfaces 104 in header/data segments having a different length than that of the header and data fields of the packets received from bus 109. It may also be that the lengths of the header and data fields of the packets and/or the lengths of header/data segments transferred between staging memory 110 and mass storage device interfaces 104 vary from one to another. In either case, the differences in length can be accommodated by defining the length of staging elements in staging memory 110 to be a variable parameter. In so doing, the variable length of individual staging elements must be taken into account when transferring data to and from staging memory 110. For example, FIG. 8 illustrates an alternative embodiment of the snaking/desnaking system of FIG. 5, in which staging element identifiers include a staging element length parameter that is loaded into FIFO 804 along with a corresponding staging memory element address. An additional counter circuit 802 (labeled SE LENGTH CNTR) is provided, into which the staging element length value from FIFO 804 is loaded after the corresponding staging element address is loaded by the port control hardware 806 into the address counter 509. The value of counter 802 is decremented once for each byte of the header/data segment 517 transferred to or from staging memory 110, and is used instead of the value of block length counter 508 to determine when port control hardware 806 is to fetch the next staging element address and length from FIFO 804. Block length counter 508 still determines when port control hardware 806 is to insert data validation information ("CRC") into the data stream on DMA channel 105, and padding is carried out in the same manner as previously described.
The use of a staging element length parameter as illustrated in FIG. 8 thus permits the length of the header/data fields of the data transferred between staging memory 110 and mass storage device interfaces 104 to be independent of the length of packets received by mass storage system 100.
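Under the FIG. 8 arrangement, the snaking loop of the earlier sketch changes only in where the next-element fetch is decided; a sketch, again with hypothetical helpers and example constants:

    #include <stdint.h>

    #define SEG_LEN  520u   /* block length for CRC placement (example) */
    #define PAD_BYTE 0x00u  /* assumed pad byte value                   */

    extern void    fifo_804_pop(uint32_t *addr, uint32_t *se_len);
    extern uint8_t staging_mem_read(uint32_t a);
    extern void    dma_put(uint8_t b);
    extern void    dma_put_crc(void);

    void read_sequence_varlen(uint32_t transfer_len)   /* counter 510 */
    {
        uint32_t block_len = SEG_LEN;                  /* counter 508 */
        uint32_t addr, se_len;
        fifo_804_pop(&addr, &se_len);                  /* counters 509, 802 */
        while (transfer_len > 0) {
            dma_put(staging_mem_read(addr++));
            block_len--; se_len--; transfer_len--;
            if (se_len == 0 && transfer_len > 0)       /* counter 802 decides */
                fifo_804_pop(&addr, &se_len);          /* the next fetch      */
            if (block_len == 0 && transfer_len > 0) {
                dma_put_crc();                         /* counter 508 decides */
                block_len = SEG_LEN;                   /* CRC placement       */
            }
        }
        while (block_len > 0) { dma_put(PAD_BYTE); block_len--; }
        dma_put_crc();
    }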
Thus a novel method and apparatus for transferring data through a staging memory has been described. One skilled in the art will appreciate that the present invention can be practiced by other than the described embodiments, and in particular may be incorporated in circuits other than the described mass storage system. The described embodiment is presented for purposes of illustration and not of limitation, and the present invention is limited only by the claims which follow.

Claims

WHAT IS CLAIMED IS:
1. A method for staging data in memory, the method comprising the steps of: defining in the memory a plurality of staging elements available for storing data; selecting from the plurality of available staging elements a series of staging elements; transferring data into the selected series of staging elements from a data channel; tabulating identifiers corresponding to each of the staging elements which received data, whereby the tabulation provides an indication of the logical order of the stored data; and in accordance with the logical order indicated by the tabulation, transferring the data from the selected series of elements of the memory to a data channel.
2. The method of claim 1, wherein each staging element comprises a group of contiguous physical memory locations characterized by a unique starting memory address and a length.
3. The method of claim 1, wherein prior to the transfer of data from the memory to the data channel, a block length value equal to a number of data units in each memory element to be transferred is generated and a data transfer length value equal to the total number of data units in the series of memory elements is generated; and wherein the block length value and the data transfer length value are decremented as each data unit is transferred to the data channel.
4. The method of claim 3, wherein the block length value is regenerated each time the block length value decrements to zero and the data transfer length value has a non-zero value.
5. The method of claim 4, further comprising the step of transmitting a padding sequence on the data channel until the block length value decrements to zero if the data transfer length value decrements to zero and the block length value has a non-zero value.
6. The method of claim 1, further comprising the step of transmitting data validation information on the data channel after data is transferred from a memory element.
7. The method of claim 1, further comprising the step of verifying data validation information appended to data to be transferred into a memory element.
8. The method of claim 1, wherein the data transfer from the memory to the data channel comprises a transmission of a repeating sequence of a header, a data field and data validation check information.
9. An apparatus for staging data, the apparatus comprising: a staging memory; means for defining in the staging memory a plurality of staging elements available for storing data; means for selecting from the plurality of available staging elements a series of staging elements; means for transferring data into the selected series of staging elements from a data channel; means for tabulating identifiers corresponding to each of the staging elements which received data, whereby the tabulation provides an indication of the logical order of the stored data; and means for, in accordance with the logical order indicated by the tabulation, transferring the data from the selected series of elements of the memory to a data channel.
10. The apparatus of claim 9, wherein staging elements are selected without regard to the nature of the data to be stored in the selected staging elements.
11. The apparatus of claim 9, wherein each staging element comprises a group of contiguous physical memory locations characterized by a unique starting memory address and a length.
12. The apparatus of claim 9, wherein prior to the transfer of data from the staging memory to the data channel, a block length value equal to a number of data units in each staging element to be transferred is generated and a data transfer length value equal to the total number of data units in the series of staging elements is generated; and wherein the block length value and the data transfer length value are decremented as each data unit is transferred to the data channel.
13. The apparatus of claim 12, wherein the block length value is regenerated each time the block length value decrements to zero and the data transfer length value has a non-zero value.
14. The apparatus of claim 12, further comprising means for transmitting a padding sequence on the data channel until the block length value decrements to zero if the data transfer length value decrements to zero and the block length value has a non-zero value.
15. The apparatus of claim 9, further comprising means for transmitting data validation information on the data channel after data is transferred from a staging element.
16. The apparatus of claim 9, further comprising means for verifying data validation information appended to data to be transferred into a staging element.
17. The apparatus of claim 9, wherein the data transfer from the memory to the data channel comprises a transmission of a repeating sequence of a header, a data field and data validation check information.
18. In a mass storage system for connection to a communication bus, an apparatus for transferring data between a mass storage device interface and a communication bus device interface, the apparatus comprising: a device interface for connecting the mass storage system to the communication bus; a staging memory having a plurality of defined staging elements randomly available for storing data; means for selecting from the plurality of available staging elements a series of staging elements; means for storing data in the selected series of staging elements; means for maintaining a data table comprising an identifier corresponding to each of the staging elements which received data, whereby the data table provides an indication of the logical order of the stored data; means for loading a sequence storage circuit with the address information for each staging element containing data in accordance with the indicated logical order of the stored data; and means for transferring the data in its logical order to a data channel, whereby the data is transferred to a mass storage or communication bus device interface.
19. The apparatus of claim 18, wherein the means for selecting a series of staging elements comprises a storage circuit which is loaded under microprocessor control with address information for each of the staging elements in the series of staging elements.
20. The apparatus of claim 18, wherein the means for storing the data in the selected series of staging elements comprises control means for accessing staging element address information from the storage circuit when a device interface receives data and for transferring the received data to the staging memory beginning at a starting memory address determined by the accessed address information.
21. The apparatus of claim 18, wherein the sequence storage circuit comprises a FIFO circuit.
22. The apparatus of claim 21, wherein the means for transferring data to a data channel comprises a state machine sequence circuit.
23. The apparatus of claim 22, wherein the state machine sequence circuit transfers data by accessing address information from the FIFO circuit and by using the address information to transfer the data from the staging memory to the data channel.
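Claims 18 through 23 describe the outbound path as a sequence storage FIFO of staging element addresses, loaded in logical order under microprocessor control and drained by a state machine sequencer. The C sketch below is a software model of that arrangement; the FIFO depth, the fixed element length, and the dma_out stand-in are assumptions.

    #include <stdint.h>
    #include <stdio.h>
    #include <stddef.h>

    #define FIFO_DEPTH 8

    /* Software model of the claim 21 sequence storage circuit: a FIFO
       of staging element starting addresses, held in logical order. */
    typedef struct { uint32_t addr[FIFO_DEPTH]; size_t head, tail; } addr_fifo;

    static void fifo_push(addr_fifo *f, uint32_t a) { f->addr[f->tail++ % FIFO_DEPTH] = a; }
    static int  fifo_pop(addr_fifo *f, uint32_t *a)
    {
        if (f->head == f->tail) return 0;           /* empty */
        *a = f->addr[f->head++ % FIFO_DEPTH];
        return 1;
    }

    /* Stand-in for the DMA move of one staging element to the channel. */
    static void dma_out(uint32_t start_addr, uint32_t len)
    {
        printf("xfer %u units from 0x%08X\n", (unsigned)len, (unsigned)start_addr);
    }

    typedef enum { FETCH_ADDR, XFER, DONE } xfer_state;

    /* Claims 22-23: the state machine pops the next address from the
       FIFO and uses it to move that element's data out, repeating
       until the FIFO drains. */
    static void run_sequencer(addr_fifo *f, uint32_t element_len)
    {
        xfer_state s = FETCH_ADDR;
        uint32_t   a = 0;

        while (s != DONE) {
            switch (s) {
            case FETCH_ADDR: s = fifo_pop(f, &a) ? XFER : DONE;       break;
            case XFER:       dma_out(a, element_len); s = FETCH_ADDR; break;
            case DONE:                                                break;
            }
        }
    }

    int main(void)
    {
        addr_fifo f = { {0}, 0, 0 };

        /* claim 19: loaded under microprocessor control, in logical order */
        fifo_push(&f, 0x1000);
        fifo_push(&f, 0x3000);
        fifo_push(&f, 0x2000);

        run_sequencer(&f, 512);
        return 0;
    }

The design point worth noting is that the logical order lives entirely in the FIFO load order (claim 19), so the state machine of claims 22 and 23 needs no ordering logic of its own.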
PCT/US1991/001251 1990-02-28 1991-02-27 A method and apparatus for transferring data through a staging memory WO1991013397A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US48653590A 1990-02-28 1990-02-28
US486,535 1990-02-28

Publications (1)

Publication Number Publication Date
WO1991013397A1 true WO1991013397A1 (en) 1991-09-05

Family

ID=23932273

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1991/001251 WO1991013397A1 (en) 1990-02-28 1991-02-27 A method and apparatus for transferring data through a staging memory

Country Status (5)

Country Link
EP (1) EP0517808A1 (en)
JP (1) JP2989665B2 (en)
AU (1) AU7497091A (en)
CA (1) CA2076533A1 (en)
WO (1) WO1991013397A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3824551A (en) * 1972-05-18 1974-07-16 Little Inc A Releasable buffer memory for data processor
WO1984000835A1 (en) * 1982-08-13 1984-03-01 Western Electric Co First-in, first-out (fifo) memory configuration for queue storage
EP0153877A2 (en) * 1984-02-29 1985-09-04 Fujitsu Limited Image data buffering circuitry
US4864495A (en) * 1986-07-30 1989-09-05 Kabushiki Kaisha Toshiba Apparatus for controlling vacant areas in buffer memory in a packet transmission system
EP0354073A1 (en) * 1988-07-01 1990-02-07 Electronique Serge Dassault Random access memory management system

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0528273A2 (en) * 1991-08-16 1993-02-24 Fujitsu Limited Buffer memory and method of managing the same
EP0528273A3 (en) * 1991-08-16 1995-02-08 Fujitsu Ltd
US5539897A (en) * 1991-08-16 1996-07-23 Fujitsu Limited Buffer memory management with register list to provide an ordered list of buffer memory addresses into which the latest series of data blocks is written
EP0899652A2 (en) * 1997-07-31 1999-03-03 Matsushita Electric Industrial Co., Ltd. Communication device, communication method and medium on which computer program for carrying out the method is recorded
EP0899652A3 (en) * 1997-07-31 1999-11-17 Matsushita Electric Industrial Co., Ltd. Communication device, communication method and medium on which computer program for carrying out the method is recorded
US6223261B1 (en) 1997-07-31 2001-04-24 Matsushita Electric Industrial Co., Ltd. Communication system method and recording apparatus for performing arbitrary application processing
CN103199952A (en) * 2012-01-06 2013-07-10 上海华虹集成电路有限责任公司 Service data transmission method in communication receiver and service data transmission module

Also Published As

Publication number Publication date
CA2076533A1 (en) 1991-08-29
AU7497091A (en) 1991-09-18
JP2989665B2 (en) 1999-12-13
EP0517808A1 (en) 1992-12-16
JPH05505049A (en) 1993-07-29

Similar Documents

Publication Publication Date Title
US5315708A (en) Method and apparatus for transferring data through a staging memory
US5630059A (en) Expedited message transfer in a multi-nodal data processing system
US6408341B1 (en) Multi-tasking adapter for parallel network applications
US5758075A (en) Multimedia communication apparatus and methods
US5133062A (en) RAM buffer controller for providing simulated first-in-first-out (FIFO) buffers in a random access memory
US5187780A (en) Dual-path computer interconnect system with zone manager for packet memory
US5193149A (en) Dual-path computer interconnect system with four-ported packet memory control
EP0365731B1 (en) Method and apparatus for transferring messages between source and destination users through a shared memory
EP0241129B1 (en) Addressing arrangement for a RAM buffer controller
KR100268565B1 (en) System and method for queuing of tasks in a multiprocessing system
US4860244A (en) Buffer system for input/output portion of digital data processing system
US5020020A (en) Computer interconnect system with transmit-abort function
JP3863912B2 (en) Automatic start device for data transmission
US5151895A (en) Terminal server architecture
US5664145A (en) Apparatus and method for transferring data in a data storage subsystems wherein a multi-sector data transfer order is executed while a subsequent order is issued
US5752076A (en) Dynamic programming of bus master channels by intelligent peripheral devices using communication packets
US5867731A (en) System for data transfer across asynchronous interface
US5802546A (en) Status handling for transfer of data blocks between a local side and a host side
US20050152274A1 (en) Efficient command delivery and data transfer
JPH03130863A (en) Control-element transfer system
KR980013142A (en) Asynchronous Transfer Mode Communication Network, System Processing Performance and Memory Usage Enhancement Method
US5151999A (en) Serial communications controller for transfer of successive data frames with storage of supplemental data and word counts
US5901291A (en) Method and apparatus for maintaining message order in multi-user FIFO stacks
US5555390A (en) Data storage method and subsystem including a device controller for respecifying an amended start address
US5347514A (en) Processor-based smart packet memory interface

Legal Events

Date Code Title Description
AK Designated states: kind code of ref document A1; designated state(s) AT AU BB BG BR CA CH DE DK ES FI GB HU JP KP KR LK LU MC MG MW NL NO PL RO SD SE SU
AL Designated countries for regional patents: kind code of ref document A1; designated state(s) AT BE BF BJ CF CG CH CM DE DK ES FR GA GB GR IT LU ML MR NL SE SN TD TG
WWE Wipo information, entry into national phase: ref document number 2076533; country of ref document CA
WWE Wipo information, entry into national phase: ref document number 1991905714; country of ref document EP
WWP Wipo information, published in national office: ref document number 1991905714; country of ref document EP
REG Reference to national code: ref country code DE; ref legal event code 8642
WWW Wipo information, withdrawn in national office: ref document number 1991905714; country of ref document EP