US20120039341A1 - Method and apparatus for transferring data between IP network devices and SCSI and Fibre Channel devices over an IP network - Google Patents

Method and apparatus for transferring data between IP network devices and SCSI and Fibre Channel devices over an IP network

Info

Publication number
US20120039341A1
US20120039341A1
Authority
US
United States
Prior art keywords
address
fibre channel
network
scsi
soip
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/284,309
Inventor
Aamer Latif
Rodney N. Mullendore
Joseph L. White
Brian Y. Uchino
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US13/284,309
Publication of US20120039341A1
Legal status: Abandoned

Classifications

    • H04L67/1097: Protocols in which an application is distributed across nodes in the network for distributed storage of data, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H04L12/06: Answer-back mechanisms or circuits
    • H04L12/4604: LAN interconnection over a backbone network, e.g. Internet, Frame Relay
    • H04L47/10: Flow control; Congestion control
    • H04L47/36: Flow control; Congestion control by determining packet size, e.g. maximum transfer unit [MTU]
    • H04L69/08: Protocols for interworking; Protocol conversion
    • H04L69/16: Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L69/161: Implementation details of TCP/IP or UDP/IP stack architecture; Specification of modified or new header fields
    • H04L69/169: Special adaptations of TCP, UDP or IP for interworking of IP based networks with other networks
    • G06F3/0601: Interfaces specially adapted for storage systems
    • G06F3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671: In-line storage system
    • G06F3/0673: Single storage device
    • H04L69/329: Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the application layer [OSI layer 7]

Definitions

  • the present invention relates to transferring information between storage devices and a network via a switched, packetized communications system.
  • the present invention relates to methods and apparatus for receiving, translating, and routing data packets between SCSI (Small Computer Systems Interface), Fibre Channel and Ethernet devices in a flexible, programmable manner.
  • SCSI: Small Computer Systems Interface
  • LAN: Local Area Network
  • SAN: Storage Area Network
  • the current SAN paradigm assumes that the entire network is constructed using Fibre Channel switches. Therefore, most solutions involving SANs require implementation of separate networks: one to support the normal LAN and another to support the SAN.
  • Introducing new equipment and technology at the storage device level (Fibre Channel interfaces), the host/server level (Fibre Channel adapter cards) and the transport level (Fibre Channel hubs, switches and routers) into a mission-critical enterprise computing environment is less than desirable for data center managers, as it involves replication of network infrastructure, new technologies (i.e., Fibre Channel), and new training for personnel.
  • Most companies have already invested significant amounts of money constructing and maintaining their network (e.g., based on Ethernet and/or ATM). Construction of a second high-speed network based on a different technology is a significant impediment to the proliferation of SANs. Therefore, a need exists for a method and apparatus that can alleviate problems with access to storage devices by multiple hosts, while retaining current equipment and network infrastructures, and minimizing the need for new training for data center personnel.
  • SCSI, Fibre Channel and Ethernet are protocols for data transfer, each of which uses a different individual format for data transfer.
  • SCSI commands were designed to be implemented over a parallel bus architecture and therefore are not packetized.
  • Fibre Channel, like Ethernet, uses a serial interface with data transferred in packets.
  • the physical interface and frame formats between Fibre Channel and Ethernet are not compatible.
  • Gigabit Ethernet was designed to be compatible with existing Ethernet infrastructures and is therefore based on an Ethernet packet architecture. Because of these differences there is a need for new methods and apparatus to allow efficient communication between these protocols.
  • the present invention solves the above and other problems, thereby advancing the state of the useful arts, by providing methods and apparatus for transferring data between storage device interfaces and network interfaces.
  • IP: Internet Protocol
  • the present invention brings sophisticated SAN capabilities to existing enterprise computing configurations, without the installation of costly Fibre Channel switches and hubs, by providing the means for Internet Protocol (IP) devices to transparently communicate with SCSI and Fibre Channel devices over an IP network.
  • the present invention accomplishes this through the use of Fibre Channel Protocol (FCP), an industry standard developed for implementation of SCSI commands over a Fibre Channel network.
  • FCP: Fibre Channel Protocol
  • the invention allows the storage devices to retain the use of standard SCSI and Fibre Channel storage interfaces and construct a SAN using a company's existing network infrastructure. Therefore, no changes are required in host bus adapters (HBA) or storage devices (e.g. disk drives, tape drives, etc).
  • the device interfaces may be either SCSI, Fibre Channel or IP interfaces such as Gigabit Ethernet. Data is switched between SCSI and IP, Fibre Channel and IP, or between SCSI and Fibre Channel. Data can also be switched from SCSI to SCSI, Fibre Channel to Fibre Channel and IP to IP.
  • the port interfaces provide the conversion from the input frame format to an internal frame format, which can be routed within the apparatus.
  • the apparatus may include any number of total ports. The amount of processing performed by each port interface is dependent on the interface type.
  • the processing capabilities of the present invention permit rapid transfer of information packets between multiple interfaces at latency levels meeting the stringent requirements for storage protocols.
  • the configuration control can be applied to each port on a switch and, in turn, each switch on the network, via an SNMP or Web-based interface, providing a flexible, programmable control for the apparatus.
  • a method for routing data packets in a switch device in a network such as a SAN.
  • the method typically comprises the steps of receiving a packet from a first network device at a first port interface of the switch device, wherein the packet is one of a SCSI formatted packet (i.e., SCSI formatted data stream converted into a packet), a Fibre Channel (FC) formatted packet and an Internet protocol (IP) formatted packet, wherein the first port interface is communicably coupled to the first network device, and converting the received packet into a packet having an internal format.
  • the method also typically includes the steps of routing the internal format packet to a second port interface of the switch device, reconverting the internal format packet to one of a SCSI formatted packet, an FC formatted packet or an IP formatted packet, and transmitting the reconverted packet to a second network device communicably coupled to the second port interface.
  • a network switch device typically comprises a first port interface including a means for receiving data packets from a network device, wherein the receiving means receives one of a SCSI formatted packet and a Fibre Channel (FC) formatted packet from a first network device, and a means for converting received packets into packets having an internal format, wherein the received data packet is converted into a first packet having the internal format.
  • the switch device also typically comprises a second port interface including a means for reconverting packets from the internal format to an IP format, wherein the first packet is converted into a packet having an IP format, and a means for transmitting IP packets to a network, wherein the IP formatted packet is transmitted to an IP network.
  • a means for routing the first packet to the second port interface is also provided.
  • a network switch device typically comprises a first port interface including a means for receiving data packets from an IP network, wherein the first interface means receives a packet in an IP format, and a means for converting received packets into packets having an internal format, wherein the received packet is converted into a first packet having an internal format.
  • the switch device also typically comprises a second port interface including a means for reconverting packets having the internal format to packets having the SCSI format, and a means for transmitting reconverted packets to a SCSI network device.
  • the switch device further typically includes a third port interface having a means for reconverting packets having the internal format to packets having the FC format, and a means for transmitting reconverted packets to a FC network device.
  • a means for routing packets between the first, second and third port interfaces is also typically provided. In operation, if the first packet is routed to the second port interface, it is converted to the SCSI format and transmitted to the SCSI network device; if the first packet is routed to the third port interface, it is converted to the FC format and transmitted to the FC network device.
  • a network switch device for use in a storage area network (SAN).
  • the switch device typically comprises a first port interface communicably coupled to a SCSI device, wherein the first port interface converts SCSI formatted data packets received from the SCSI device into data packets having an internal format, and wherein the first port interface converts data packets having the internal format into SCSI formatted data packets.
  • the switch device also typically comprises a second port interface communicably coupled to a FC device, wherein the second port interface converts FC formatted data packets received from the FC device into data packets having the internal format, and wherein the second port interface converts data packets having the internal format into FC formatted data packets.
  • the switch device further typically includes a third port interface communicably coupled to a IP device, wherein the third port interface converts IP formatted data packets received from the IP device into data packets having the internal format, and wherein the third port interface converts data packets having the internal format into IP formatted data packets, and a switch fabric for routing data packets having the internal format between the first, second and third port interfaces.
  • the port interface coupled to the first device converts the first data packet to a packet having the internal format and routes the internal format packet through the switch fabric to the port interface coupled to the second device, wherein the port interface coupled to the second device reconverts the internal format packet into the format associated with the second device and sends the reconverted packet to the second device.
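  • The following sketch illustrates, in simplified form, the internal-format switching described above. The class and field names are hypothetical and not taken from the patent; it only models port interfaces converting native frames to a common internal format, a fabric routing them, and the egress port reconverting them.

```python
# Illustrative sketch only: hypothetical names, not the patent's implementation.
from dataclasses import dataclass

@dataclass
class InternalFrame:
    dest_port: int        # egress port chosen by the ingress routing logic
    native_type: str      # "SCSI", "FC", or "IP" (format of the original frame)
    payload: bytes        # FCP header + payload carried unchanged

class PortInterface:
    def __init__(self, port_id: int, native_type: str):
        self.port_id, self.native_type = port_id, native_type

    def to_internal(self, native_frame: bytes, dest_port: int) -> InternalFrame:
        # Real hardware would parse SCSI/FC/Ethernet headers here.
        return InternalFrame(dest_port, self.native_type, native_frame)

    def from_internal(self, frame: InternalFrame) -> bytes:
        # Reconvert to this port's native format (e.g., re-encapsulate as SoIP).
        return frame.payload

class SwitchFabric:
    def __init__(self, ports):
        self.ports = {p.port_id: p for p in ports}

    def forward(self, ingress: PortInterface, native_frame: bytes, dest_port: int) -> bytes:
        internal = ingress.to_internal(native_frame, dest_port)
        egress = self.ports[internal.dest_port]
        return egress.from_internal(internal)

# Example: a frame entering FC port 0 leaves IP (SoIP) port 1.
fabric = SwitchFabric([PortInterface(0, "FC"), PortInterface(1, "IP")])
out = fabric.forward(fabric.ports[0], b"\x00" * 24, dest_port=1)
```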
  • a network switch device for use in a storage area network (SAN) is provided.
  • the switch may comprise any combination of Fibre Channel, SCSI, Ethernet and Infiniband ports, and may comprise any number of total ports.
  • the switch device typically comprises a first port interface communicably coupled to one of a SCSI device, an FC device, or an IP device, a second port interface, wherein the second port interface is configurable to communicate with either an FC device or an Ethernet device, and a switch fabric for routing data packets having the internal format between the first and second port interfaces.
  • when the second port interface is configured to communicate with an FC device, it converts FC formatted data packets received from the FC device into data packets having an internal format, and converts data packets having the internal format received from the switch fabric into FC formatted data packets; when the second port interface is configured to communicate with an Ethernet device, it converts Ethernet formatted data packets received from the Ethernet device into data packets having the internal format, and converts data packets having the internal format received from the switch fabric into Ethernet formatted data packets.
  • the second port interface can be either self-configurable or user configurable.
  • FIG. 1 illustrates an example of a SAN constructed according to the present invention
  • FIG. 2 is a block diagram of an overview of the Storage over Internet Protocol (SoIP) implementation
  • FIG. 3 illustrates the required protocol conversion steps between Fibre Channel, SCSI and IP devices in the apparatus switch fabric according to an embodiment of the present invention
  • FIG. 4 is an overview of the legacy storage protocol conversion method by which the functionality of the invention is achieved
  • FIG. 5 is a high level switch diagram outlining the basic architecture of the physical apparatus according to an embodiment of the present invention.
  • FIGS. 6 a - c illustrate FCP packet encapsulation according to an embodiment of the present invention
  • FIG. 7 shows the frame flow for the “session” initialization for Fibre Channel devices connected to an SoIP network
  • FIGS. 8 and 9 show the flow of data frames for a node login initiated by FC port A of switch 1 to FC Port B of switch 2 located remotely according to an embodiment of the present invention
  • FIG. 10 shows the routing of Port Login Request and Response frames for local FC ports according to an embodiment of the present invention
  • FIG. 11 shows an example of the address domains which exist in a network according to one embodiment of the present invention.
  • FIGS. 12 a - d illustrate a network architecture and address tables for a Third Party Command example
  • FIG. 13 illustrates layer 2 FCP packet encapsulation according to an embodiment of the present invention
  • FIGS. 14 a - c illustrate examples of UDP Frame demultiplexing according to embodiments of the present invention
  • FIG. 15 is a high level block diagram which illustrates the basic architecture for a switch port that supports both Fibre Channel and Ethernet according to an embodiment of the present invention
  • FIG. 16 is a high level block diagram which illustrates the basic architecture for a switch port that supports both Fibre Channel and Ethernet, where two routing blocks are combined into a single block according to an embodiment of the present invention
  • FIG. 17 is a high level block diagram which illustrates the basic architecture for a switch port that supports both Fibre Channel and Ethernet wherein low-level port interface logic blocks are combined according to an embodiment of the present invention
  • FIG. 18 is a high level block diagram which illustrates the basic architecture for a switch port that supports both Fibre Channel and Ethernet using a Field Programmable Gate Array (FPGA) according to an embodiment of the present invention
  • FIG. 19 shows a block diagram of a common FC/Gigabit Ethernet port combined with a GBIC interface according to an embodiment of the present invention.
  • FIG. 20 illustrates the architecture of an intelligent network interface card (NIC) according to an embodiment of the present invention.
  • FIG. 1 illustrates an example of a storage area network (SAN) 10 according to an embodiment of the present invention.
  • network 10 includes numerous storage devices, such as tape libraries 15 , RAID drives 20 and optical drives 25 (e.g., CD, DVD, etc.) and servers 30 .
  • the storage devices can be either storage targets (e.g., tape libraries 15 , RAID drives 20 , etc.) or initiators (e.g., servers 30 ). Note that a device could be both an initiator and a target.
  • the invention is implemented in a switching device 35 within network 10.
  • For example, as shown in FIG. 1, each switching device 35 is an "edge" switch which provides the connectivity between nodes (i.e., one or more storage devices) and a network 40.
  • the switch resides on the “edge” of the network where the devices are located.
  • Each edge switch 35 allows connected storage elements to communicate through the edge switch with no traffic being sent to network 40 .
  • Each edge switch 35 also allows storage elements connected to different edge switches to communicate with each other through network 40 .
  • network 40 is an Ethernet network, but other networks may be used, for example, Asynchronous Transfer Mode (ATM)-based or FDDI-based networks, or the like.
  • a switching device 35 is implemented in an SoIP (Storage over Internet Protocol) storage area network (SAN) as shown in FIG. 2 .
  • SoIP is a framework for transporting SCSI commands and data over IP networks using the Fibre Channel Protocol for SCSI (FCP) for communication between IP networked storage devices.
  • FCP is an FC-4 Upper Layer Protocol for sending SCSI commands and data over a Fibre Channel network yielding a “serial” SCSI network.
  • the SoIP framework enables FCP for use on an IP network by defining the SoIP protocol.
  • Storage devices and host bus adapters operating the SoIP protocol form a storage area network (SAN) directly on an IP network. This framework offers an enormous advantage in the installation and utility of SANs.
  • each SoIP device 50 converts SCSI commands and data into FCP data frames in FCP block 52 .
  • the SoIP protocol layer block 54 then encapsulates these FCP frames in multiple IP packets using either the User Datagram Protocol (UDP) or Transport Control Protocol (TCP).
  • IP port 56 forwards the packet to IP network 60 , which routes the IP packets between the devices 50 or to switch 35 .
  • IP network 60 is preferably an Ethernet network, but may be based on any IP-compatible media including ATM, FDDI, SONET and the like.
  • the storage name server 65 serves as a database where devices store their own information and retrieve information on other devices in the SoIP network.
  • the SoIP proxy 70 performs protocol conversion between SoIP based on UDP and SoIP based on TCP.
  • FIG. 3 illustrates data exchange between storage devices using a switch 135 according to an embodiment of the present invention.
  • switch 135 is configured to receive data from different interfaces, each of which has a different data or frame format.
  • SCSI device 105 transmits data using a “parallel” SCSI interface 106
  • Fibre Channel (FC) device 110 transmits data using Fibre Channel interface 111
  • Ethernet device 115 transmits data using Ethernet interface 116 .
  • Switch 135 translates data received from a source port in one of the three different formats into an internal format and transfers the data in the internal format through switch fabric 140 to a destination port. The destination port translates the data back into the native format appropriate for the connection thereto.
  • each device, e.g., SCSI device 105, FC device 110, Ethernet device 115, or generic IP device 120 (e.g., disk drive, tape drive, server), performs storage operations based on the SCSI Command Set.
  • For FC device 110, the SCSI commands and data are converted to FCP and transmitted using Fibre Channel interface 111.
  • For SCSI device 105, the SCSI commands and data are transferred directly using a "parallel" bus 106.
  • the SCSI port interface 125 of switch 135 acts like a SCSI to FC bridge so that the SCSI port looks like an FC port from the point of view of switch fabric 140 .
  • the SCSI data is preferably converted to FCP, and is not actually transmitted using a Fibre Channel interface.
  • For Ethernet device 115, SCSI commands and data are converted to FCP and then encapsulated in an IP packet using UDP or TCP. The IP packet is then encapsulated in an Ethernet frame and transmitted using Ethernet interface 116.
  • The term "SCSI device" implies a device with a "parallel" SCSI bus, while the term "Fibre Channel device" implies a device with a Fibre Channel interface. Both devices operate as SCSI devices at the command level.
  • SCSI device 105 does not convert the SCSI commands and data to an FCP format. Therefore, it is not possible to transfer data between FC device 110 and SCSI device 105 directly.
  • FIG. 3 shows a storage device constructed using Ethernet in the same manner as a device is constructed with FC. Ethernet simply replaced Fibre Channel as the media for transport.
  • Infiniband may also be implemented, for example in generic IP device 120 . As is well known, Infiniband is an I/O interface that merges the work of NGIO (Next Generation I/O) and Future I/O.
  • FIG. 4 illustrates data exchange between Fibre Channel, SCSI and IP devices in switch apparatus 135 according to an embodiment of the present invention.
  • the example in FIG. 4 is for an Ethernet based IP network 160; however, any other IP network based on other protocols such as ATM, FDDI, etc. may be used.
  • FIG. 4 shows the protocol translations which occur for each device.
  • SCSI device 105 communicates with switch 135 using SCSI commands directly with no encapsulation of data or commands in data frames.
  • FC device 110 uses the FCP protocol to send SCSI commands and data to switch 135.
  • Switch 135 converts the received data to a common protocol based on FCP to allow the devices to communicate with each other.
  • switch 135 performs address translation from the Fibre Channel and SCSI addressing schemes to the IP addressing method, as will be discussed in more detail below. This is done transparently so that no changes are required in Fibre Channel device 110 or SCSI device 105, or in any host bus adapters, driver software or application software.
  • FIG. 5 is a high-level switch diagram outlining the basic architecture of a physical switch apparatus 235 according to an embodiment of the present invention.
  • switch 235 includes three main elements: switch fabric 240 , management processor 250 and port interfaces 270 .
  • Switch fabric 240 provides a high bandwidth mechanism for transferring data between the various port interfaces 270 as well as between port interfaces 270 and management processor 250 .
  • Management processor 250 performs management related functions for switch 235 (e.g. switch initialization, configuration, SNMP, Fibre Channel services, etc.) primarily through management bus 255 .
  • Port interfaces 270 convert data packets from the input frame format (e.g., parallel SCSI, FC, or Ethernet) to an internal frame format. The internal frame format data packets are then routed within switch fabric 240 to the appropriate destination port interface. Port interfaces 270 also determine how packets are routed within the switch. The amount of processing performed by each port interface 270 is dependent on the interface type. SCSI ports 270-1 and 270-2 require the most processing because the SCSI interface is half-duplex and is not frame oriented. The SCSI port interfaces 270-1 and 270-2 also emulate the functionality of a SCSI host and/or target. Fibre Channel ports 270-3 and 270-4 require comparatively little processing, as Fibre Channel data is already frame oriented.
  • IP ports 270-5 and 270-6 (e.g., Ethernet ports) and SCSI ports 270-1 and 270-2 convert received data into an internal frame format before sending the packets through switch fabric 240.
  • Because FCP frames are not directly compatible with an Ethernet interface as they are with a Fibre Channel interface, the transmission of FCP packets on an Ethernet interface requires that an FCP frame be encapsulated in an Ethernet frame as shown in FIG. 6a.
  • FIG. 6 a illustrates FCP packet encapsulation in an IP frame carried over an Ethernet frame according to an embodiment of the present invention.
  • Field Definitions for FIG. 6 a include the following:
  • TYPE: The Ethernet packet type.
  • CHECKSUM PAD: An optional 2-byte field which may be used to guarantee that the UDP checksum is correct even when a data frame begins transmission before all of the contents are known.
  • The CHECKSUM PAD bit in the SoIP Header indicates if this field is present.
  • ETHERNET CRC: Cyclic Redundancy Check (4 bytes).
  • The SoIP Header field contains the following parameters:
  • CLASS: This 4-bit field indicates the class of service. In one embodiment, only the values 2 or 3 are used.
  • SoIP FLAGS: This 8-bit field contains bits that indicate various parameters for a data frame as shown in FIG. 6b.
  • the User Datagram Protocol (UDP) Header is the protocol used within the IP packet. TCP may also be used.
  • The UDP header, defined in RFC 768, is 8 bytes in length, consisting of four 16-bit fields as shown in FIG. 6c, with the following field definitions:
  • SOURCE PORT: An optional field. When meaningful, it indicates the port of the sending process, and may be assumed to be the port to which a reply should be addressed in the absence of other information. If not used, a value of zero is inserted.
  • DESTINATION PORT: Has a meaning within the context of a particular internet destination address.
  • UDP LENGTH: The sum of the UDP header length, the FCP header length, the FCP payload length and, optionally, the checksum pad.
  • CHECKSUM: The 16-bit one's complement of the one's complement sum of a pseudo header of information from the IP header, the UDP header, and the data, padded with zero bytes at the end (if necessary) to make a multiple of 2 bytes.
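  • As a minimal sketch of the UDP Length and Checksum rules just described (RFC 768), the following builds a UDP datagram around an assumed SoIP/FCP payload; the function names and addresses are illustrative, not from the patent.

```python
# Sketch of UDP length/checksum generation for an SoIP frame (illustrative only).
import struct

def ones_complement_sum(data: bytes) -> int:
    if len(data) % 2:                      # pad with a zero byte to a 16-bit multiple
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    while total > 0xFFFF:                  # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return total

def udp_datagram(src_ip: bytes, dst_ip: bytes, src_port: int, dst_port: int,
                 payload: bytes) -> bytes:
    length = 8 + len(payload)              # UDP header + SoIP header + FCP header/payload
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, length)   # protocol 17 = UDP
    checksum = ~ones_complement_sum(pseudo + header + payload) & 0xFFFF
    return struct.pack("!HHHH", src_port, dst_port, length, checksum or 0xFFFF) + payload

# e.g. an FCP frame (24-byte header plus data) carried from 10.0.0.1 to 10.0.0.2
dgram = udp_datagram(bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]), 3000, 3000,
                     b"\x00" * 24 + b"payload")
```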
  • a switch 235 encapsulates FC packets into an Ethernet Frame with a “wrapper” around the FC information.
  • the encapsulation of an FCP data frame in an Ethernet packet may require that the FCP data frame be limited in size because the maximum FCP data frame size is 2136 bytes (24 byte header+2112 byte payload) whereas an Ethernet packet has a maximum size of 1518 bytes.
  • The use of Ethernet Jumbo Frames, which permit packet sizes up to 9 Kbytes, eliminates the need to limit the Fibre Channel frame size.
  • However, support for Ethernet jumbo frames is limited within the existing network infrastructure. Therefore, FCP data frames need to be limited in size; otherwise a large FCP data frame may need to be "fragmented" into two separate Ethernet frames.
  • The Login procedures defined in the Fibre Channel standard allow devices to negotiate the maximum payload size with the switch fabric 240.
  • the switch fabric 240 can respond to a login with a smaller payload size than the maximum (e.g., 1024 bytes).
  • Switch 235 makes use of this fact to limit FC packets to a size which can be encapsulated in an Ethernet packet to eliminate the need for fragmenting FC packets.
  • a node's maximum receive data field size is provided to switch fabric 240 during “Fabric Login” and to each destination node during “Port Login.”
  • the fabric or node being “logged into” generates a login response which indicates the maximum receive data field size for data frames it is capable of receiving. Note that these values may not be the same.
  • a fabric may have the maximum allowed size of 2112 bytes while a node may limit the maximum size to 1024 bytes (e.g. the Hewlett-Packard Tachyon-Lite Fibre Channel Controller).
  • a source node may not transmit a data frame larger than the maximum frame size as determined for the login response.
  • an upper limit is placed on the frame payload size during login by a device.
  • the upper limit value is set by determining or discovering the maximum IP datagram size and subtracting 60 bytes to account for the various headers and trailers. For example, for an Ethernet Frame, the upper limit value equals 1440 bytes. That is, the payload for an FCP Frame cannot exceed 1440 bytes in size. This limit is established because an FCP Frame being transported across an IP network will not be allowed to fragment. Allowing IP datagrams to fragment degrades network performance and so most networks rarely fragment. An IP header's Do Not Fragment Flag can be used to prevent the IP layer from fragmenting the datagram. Even with node login setting an appropriate size for the FCP payload, this bit is set to ensure that fragmentation does not occur.
  • the payload is padded to a multiple of 4 bytes to make it easy to convert frames being sent to legacy FC devices.
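  • The limit derivation and padding rules above can be summarized in a short sketch; the constant and function names are mine, and the 60-byte allowance and 1440-byte Ethernet result simply restate the figures given above.

```python
# Sketch: derive the FCP payload limit from the discovered IP MTU and pad to 4 bytes.
HEADER_TRAILER_OVERHEAD = 60   # IP + UDP + SoIP headers/trailers, per the description

def max_fcp_payload(ip_mtu: int) -> int:
    return ip_mtu - HEADER_TRAILER_OVERHEAD

def pad_payload(payload: bytes) -> bytes:
    pad = (-len(payload)) % 4               # bytes needed to reach a 4-byte multiple
    return payload + b"\x00" * pad

assert max_fcp_payload(1500) == 1440        # standard Ethernet MTU
assert len(pad_payload(b"\x01" * 10)) == 12
```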
  • Each switch 235 preferably makes use of the Buffer to Buffer Receive Data Field size to force end nodes to communicate with data frames that will fit within an IP packet carried over an Ethernet link.
  • One method for enforcing the maximum frame size is to intercept Node Login packets and restrict the advertised Buffer to Buffer Receive Data Field Size to a value which can be transmitted across an Ethernet network without being fragmented. Therefore, each Management Processor may need to perform MTU (Maximum Transmission Unit) discovery to determine a size which does not result in fragmentation of IP packets in the network.
  • When an FC port performs a Port Login with an FC port which is local (i.e., connected to the same switch), it is not necessary to change the Buffer to Buffer Receive Data Field Size of the Login request or response. This is because, in one embodiment, the switch supports the maximum frame size for transfers between FC ports on the same switch. However, the FC port interface logic will always redirect the Port Login packets to the switch's Management Processor to simplify the port interface logic. Thus, in this embodiment, the switch looks and acts like an FC switch from the point of view of any FC devices connected thereto. An example of the routing of Port Login Request and Response frames for local FC ports is shown in FIG. 10.
  • routing FC Port Login Request/Response packets to the Management Processor allows the Port Login for SCSI ports to be handled by the Management Processor.
  • the Management Processor always handles login for SCSI.
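  • A hypothetical sketch of the Management Processor's clamping decision for an intercepted login follows; the dict-based field access and the 1440-byte default are assumptions for illustration (a real implementation would parse the FLOGI/PLOGI common service parameters).

```python
# Sketch: clamp the Buffer-to-Buffer Receive Data Field Size in intercepted logins.
def clamp_login(login_params: dict, local_fc_peer: bool, max_soip_payload: int = 1440) -> dict:
    params = dict(login_params)
    if not local_fc_peer:
        # Peer is reached over the IP network: limit frames to one IP packet.
        params["bb_receive_data_field_size"] = min(
            params["bb_receive_data_field_size"], max_soip_payload)
    # Local FC-to-FC traffic on the same switch keeps the full 2112-byte payload.
    return params

print(clamp_login({"bb_receive_data_field_size": 2112}, local_fc_peer=False))
# -> {'bb_receive_data_field_size': 1440}
```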
  • an SoIP device is uniquely identified using two parameters: an IP address and an SoIP socket number. Therefore, it is possible for a device to have a unique IP address or for multiple devices to share an IP address. For example, all of the devices on a Fibre Channel arbitrated loop may share an IP address while a server Host Bus Adapter may have a dedicated IP address. In one embodiment, there are two possible modes for assignment of the SoIP socket number: local or global.
  • a single SoIP device connected directly to an IP network must have a unique IP address in order for the network to be able to route data frames to the device.
  • An IP network will not route traffic based on the SoIP socket number.
  • Devices connected to a switch (e.g., switch 235), however, may share an IP address, since the switch can distinguish among them using the SoIP socket number.
  • An SoIP SAN with "legacy" Fibre Channel devices attached has different address domains due to the two different addressing methods used: IP and Fibre Channel.
  • FIG. 11 shows an example of the address domains which exist in a network according to one embodiment of the present invention.
  • SoIP devices communicate using IP addresses and the SoIP socket numbers while the Fibre Channel devices (SCSI devices are treated as Fibre Channel devices by a switch) use Fibre Channel addresses.
  • Each switch 235 performs address translation between the IP and Fibre Channel address domains.
  • Switch 235-1 performs address translation between the IP address domain and FC address domain 1.
  • Switch 235-2 performs address translation between the IP address domain and FC address domain 2.
  • Each switch 235 assigns an IP address, SoIP socket number and Fibre Channel address to each Fibre Channel device when the device performs a fabric login.
  • a Fibre Channel device only learns about its assigned Fibre Channel address.
  • the assigned IP address, SoIP socket number and Fibre Channel Address are maintained within a translation table (not shown) in the switch.
  • Parallel SCSI devices are assigned their addresses by the switch during initialization of the SCSI port.
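  • The translation table mentioned above might be modeled as below; the class name and sample values are illustrative assumptions, showing only the bidirectional mapping between a locally assigned Fibre Channel address and the IP address / SoIP socket number pair.

```python
# Sketch of a switch-resident FC <-> IP/SoIP address translation table (illustrative).
class AddressTranslationTable:
    def __init__(self):
        self._by_fc = {}     # FC address -> (IP address, SoIP socket number)
        self._by_soip = {}   # (IP address, SoIP socket number) -> FC address

    def add(self, fc_addr: int, ip_addr: str, soip_socket: int) -> None:
        self._by_fc[fc_addr] = (ip_addr, soip_socket)
        self._by_soip[(ip_addr, soip_socket)] = fc_addr

    def fc_to_soip(self, fc_addr: int):
        return self._by_fc[fc_addr]                    # used when encapsulating FCP in SoIP

    def soip_to_fc(self, ip_addr: str, soip_socket: int) -> int:
        return self._by_soip[(ip_addr, soip_socket)]   # used when de-encapsulating

table = AddressTranslationTable()
table.add(fc_addr=0x010200, ip_addr="192.168.1.10", soip_socket=0x010200)
assert table.fc_to_soip(0x010200) == ("192.168.1.10", 0x010200)
```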
  • the Fibre Channel ports direct all Name server requests by a Fibre Channel device to the management processor for processing.
  • the management processor converts Fibre Channel Name Server requests into SoIP Name Server requests that are then forwarded to the SoIP Name Server, e.g., implemented in server 280 .
  • the SoIP name server functionality is distributed and thus handled directly by the management processor.
  • Responses from the name server are returned to the management processor where they are converted into Fibre Channel Name Server responses before being forwarded to the port that originated the name server request.
  • When a data frame is received from an attached Fibre Channel device, switch 235 converts the packet into an SoIP compatible packet. The conversion encapsulates the FCP data frame in an IP data frame as described above.
  • the IP addresses and SoIP socket numbers are derived by using the Fibre Channel source address (S_ID) and the destination address (D_ID) as “keys” into the IP/Fibre Channel address conversion table on the name server.
  • the Fibre Channel address fields are replaced by the SoIP socket numbers when translating a Fibre Channel data frame to an SoIP data frame.
  • the packet is then transmitted on the IP network and routed using the destination IP address. If the destination device is an SoIP compatible device, the packet is processed directly (i.e., de-encapsulated and processed as an FCP packet) by the destination device.
  • the packet is routed to a switch 235 , which receives the packet, de-encapsulates the SoIP packet and replaces the SoIP socket numbers with the appropriate source and destination Fibre Channel addresses based on the source and destination IP addresses and SoIP socket numbers.
  • local assignment is the preferred method for assigning SoIP socket numbers.
  • native SoIP devices select their SoIP socket numbers while an SoIP switch (e.g., switch 235 ) assigns the SoIP socket number for Fibre Channel and SCSI devices attached to the switch.
  • the SoIP socket number is assigned locally, the value chosen may be any value that results in a unique IP Address/SoIP socket number combination.
  • Devices that share an IP address must be assigned unique SoIP socket numbers in order to create a unique IP Address/SoIP socket number pair.
  • Devices that have a unique IP address may have any desired SoIP socket number.
  • an SoIP switch assigns the SoIP socket numbers in such a manner as to simplify the routing of received data frames.
  • a switch must also assign a locally significant Fibre Channel address to each “remote” device for use by the local devices in addressing the “remote” devices. These locally assigned addresses are only known by a switch within its Fibre Channel address domain. Thus each switch maintains a set of locally assigned Fibre Channel addresses which correspond to the globally known IP Address/SoIP Port Number pairs defined in the SoIP Name Server.
  • each switch 235 intercepts Fibre Channel Extended Link Service requests and responses which have Fibre Channel address information embedded in the payload. Extended Link Service requests and responses are generated infrequently. Therefore, it is acceptable to redirect the Extended Link Service requests to the switch's management processor which makes any necessary changes to the data frame. If an Extended Link Service request/response has no addressing information embedded in the payload, the Management Processor simply retransmits the packet with no modifications.
  • the IP Address and SoIP socket number assigned to a Fibre Channel or SCSI device are determined by the switch. The assignment of these addresses is implementation dependent. In a preferred embodiment, the SoIP socket number is assigned the device's local Fibre Channel address. In this embodiment, the switch obtains the local Fibre Channel address directly from the received data frame. Alternatively, assignment of the SoIP socket number is based on an incrementing number that can be used as an index into an address table.
  • each device is assigned a unique IP address.
  • this type of assignment may result in the use of a large number of IP addresses.
  • the use of a single IP address for each device also has implications for routing in the IP network. Therefore, in a preferred embodiment, IP addresses are assigned such that at least a subset of a switch's attached devices share an IP address. For example, an IP address can be assigned to each switch port. Each device attached to that switch port then shares the port's IP address. Thus, an attached Fibre Channel N_Port would have a unique IP address while the devices on a Fibre Channel arbitrated loop attached thereto would share an IP address.
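  • The per-port sharing scheme just described can be sketched as follows; the addresses and the reuse of the local FC address as the SoIP socket number are assumptions consistent with the preferred embodiment described above.

```python
# Sketch: one IP address per switch port, shared by all loop devices on that port.
def assign_addresses(port_ip: str, loop_device_fc_addrs):
    # Reuse the locally assigned FC address as the SoIP socket number,
    # as in the preferred embodiment described above.
    return {fc: (port_ip, fc) for fc in loop_device_fc_addrs}

loop = assign_addresses("192.168.1.20", [0x0100E1, 0x0100E2, 0x0100E4])
# All three loop devices share 192.168.1.20 but have unique (IP, socket) pairs.
assert len(set(loop.values())) == 3
```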
  • Fibre Channel addresses are assigned globally. Globally assigned Fibre Channel addresses provide the maximum compatibility for “legacy” Fibre Channel devices.
  • the SoIP name server is responsible for managing the allocation of Global Fibre Channel Addresses.
  • a global Fibre Channel address space may need to be supported because in some cases Fibre Channel addresses may be embedded within “third-party” SCSI commands.
  • An example of such a third-party command is COPY.
  • the COPY command instructs another device to copy data.
  • the use of “third-party” commands is rare but when used, either the command would need to be modified for address compatibility or the Fibre Channel addresses would need to be globally assigned.
  • an example third party COPY command will be used to illustrate a problem that occurs with locally assigned Fibre Channel addresses and third-party commands.
  • locally assigned Fibre Channel Addresses are also used as the SoIP socket number.
  • Each device has a unique IP address in this example.
  • FIG. 12b shows the IP Address and SoIP socket number each device has advertised to the Name Server, which identifies how the device is addressed within the SoIP network. Each device is uniquely identified by the combination of IP Address and SoIP socket number. Assume that switches 235-3 and 235-4 and Tape Library C are aware of every device in the system. Tape Library C would then have an address table that is the same as the name server's address table. Switches 235-3 and 235-4 will have assigned local Fibre Channel addresses to each device.
  • FIG. 12c illustrates the address table stored on switch 235-3.
  • FIG. 12d illustrates the address table stored on switch 235-4. Because the Fibre Channel addresses are assigned locally, the address assignment is purely arbitrary.
  • Server A in local domain 1 sends a COPY command to Server B in local domain 3 indicating that data is to be copied from RAID drive B to Tape Library B, both of which are located in local domain 3 .
  • the COPY command will contain the addresses from Server A's perspective. Therefore, referring to FIG. 12c, the command received by Server B is COPY from Fibre Channel device 000500 (RAID drive B) to Fibre Channel device 000600 (Tape Library B). However, Server B will interpret the COPY command using the address table of switch 235-4 (FIG. 12d) and assume it should copy data from RAID drive A to Tape Library A and not RAID drive B to Tape Library B. Thus, the wrong operation will be performed.
  • Server B sends a command to Server A to copy from RAID drive A to Tape Library C.
  • the command will be COPY from Fibre Channel address 000500 to 009900 (the addresses are from the perspective of switch 235-4).
  • Server A will assume the command is to copy data from RAID drive B to a nonexistent device because 009900 is not in the address table of switch 235-3.
  • the switch gets around this problem by intercepting each third party command and modifying the embedded Fibre Channel addresses to be compatible with the destination device.
  • this requires that the source switch know the assignment of local addresses in the destination switch. While it is possible for a switch to convert the third-party commands, alternative methods are preferred.
  • Fibre Channel addresses are globally assigned for devices that are referenced by Fibre Channel address in third-party commands.
  • the use of a Global Fibre Channel address allows third-party commands to be used with no modification, but sets the total number of devices possible in an SoIP network to the same maximum as a Fibre Channel network. Only those devices that are referenced in a third-party command require a global address, although all devices within an SoIP network can be assigned global addresses.
  • a Globally Assigned Fibre Channel address is preferably used as the device's SoIP socket number. This simplifies the conversion of “legacy” Fibre Channel data frames to SoIP compatible data frames. Therefore, globally assigning Fibre Channel addresses is equivalent to globally assigning SoIP socket numbers.
  • Global SoIP socket number allocation is managed by the SoIP Name Server, which allocates Global SoIP socket numbers as requested from a pool of free socket numbers, and deallocates socket numbers (returns them to the free pool) when they are no longer used.
  • the assignment of Global SoIP socket numbers for all devices in an SoIP network is the simplest solution from a management standpoint because it does not require specifying the subset of devices that require a Global SoIP socket number (or alternatively, the devices that can use a local SoIP socket number).
  • All devices in an SoIP network either have a locally assigned SoIP socket number or a globally assigned SoIP socket number. All SoIP compatible devices and switches support both modes. Each device or switch determines from the SoIP Name Server which mode is to be supported when it logs into the network. An SoIP Name Server configuration parameter indicates the SoIP socket number allocation mode.
  • When SoIP socket numbers are assigned globally, the requester indicates the minimum number of socket numbers requested and a 24-bit mask defining the boundary. For example, a 16-port switch may request 4096 socket numbers with a bit mask of FFF000 (hex), indicating that the socket numbers should be allocated on a boundary where the lower 12 bits are 0. The switch would then allocate 256 socket numbers to each port (for support of an arbitrated loop). Allocation of socket numbers on a specified boundary allows the switch to allocate socket numbers that directly correlate to port numbers. In the above example, bits 11:8 would identify the port. Native SoIP devices preferably allocate only one global SoIP socket number from the SoIP Name Server.
  • the SoIP Name Server also includes a configuration parameter that selects “Maximum Fibre Channel Compatibility” mode which only has meaning for Global assignment of SoIP socket numbers. Devices are able to query the Name server for the value of this parameter. When enabled, this mode specifies that global SoIP socket numbers are to be allocated in blocks of 65536 (on boundaries of 65536) to switches. This mode is compatible with the existing Fibre Channel modes of address allocation where the lower 8 bits identify the device, the middle 8 bits identify the port and the upper 8 bits identify the switch. SoIP switches check for this mode and, if enabled, request 65536 socket numbers when requesting global SoIP socket numbers. In this mode, Native SoIP devices preferably allocate only one global SoIP socket number from the SoIP Name Server.
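  • A minimal sketch of a Name Server side allocator following the boundary-mask example above (4096 numbers requested with mask FFF000, bits 11:8 selecting the port); the allocator class itself and its bookkeeping are hypothetical.

```python
# Sketch: allocate global SoIP socket numbers on a mask-defined boundary.
class GlobalSocketAllocator:
    def __init__(self, total: int = 1 << 24):        # 24-bit socket number space
        self.next_free = 0
        self.total = total

    def allocate(self, count: int, boundary_mask: int) -> int:
        alignment = (~boundary_mask & 0xFFFFFF) + 1   # FFF000 -> alignment of 0x1000
        base = (self.next_free + alignment - 1) & ~(alignment - 1)
        if base + count > self.total:
            raise RuntimeError("socket number space exhausted")
        self.next_free = base + count
        return base

alloc = GlobalSocketAllocator()
block = alloc.allocate(4096, 0xFFF000)                # one block for a 16-port switch
port_5_base = block | (5 << 8)                        # bits 11:8 select the port
```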
  • When operating in a Layer 2 network (e.g., no IP routers), the frame format is modified to simplify the encapsulation logic.
  • a Layer 2 network does not require the IP Header or the UDP header. All frames are forwarded using the physical address (e.g. Ethernet MAC address).
  • a switch then routes frames internally based on the Layer 2 physical address (e.g. Ethernet MAC address) combined with the SoIP socket number.
  • the Layer 2 physical address replaces the IP address as a parameter in uniquely identifying an SoIP device.
  • FIG. 13 shows the frame format for an FCP frame transmitted on Ethernet.
  • An Ethernet Type value 290 is defined specifically for SoIP to allow a station receiving the frame to distinguish the frame from other frame types (e.g., IP).
  • the IP and UDP headers have been removed which reduces the frame overhead.
  • An advantage is that the length and checksum fields in the UDP header no longer need to be generated.
  • the generation of the IP and UDP headers introduces additional latency for the frame transmission because the length and checksum are located at the beginning of the frame. Therefore, it is necessary to buffer the entire frame to determine the length and checksum and write them into the header.
  • For an Ethernet Layer 2 SoIP frame it is only necessary to determine the amount of padding, if any, added at the end of the frame.
  • the number of PAD bytes must be included in the SoIP Header to allow the PAD bytes to be removed at the receiving station. Since the padding is only required to satisfy a minimum Ethernet frame size of 64 bytes, it is possible to complete the header generation after 64 bytes of the frame (or the entire frame) have been received.
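  • The padding rule above reduces to the small calculation sketched below; the function name is mine, and the pad length is returned alongside the frame to stand in for the PAD LENGTH field of the SoIP header.

```python
# Sketch: pad a Layer 2 SoIP frame to the 64-byte Ethernet minimum and record the pad.
ETHERNET_MIN_FRAME = 64

def pad_layer2_frame(frame: bytes):
    pad_len = max(0, ETHERNET_MIN_FRAME - len(frame))
    return frame + b"\x00" * pad_len, pad_len    # pad_len goes in the SoIP header

padded, pad_len = pad_layer2_frame(b"\x55" * 46)
assert len(padded) == 64 and pad_len == 18
```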
  • The Layer 2 frame format is similar to the Layer 3 SoIP frame format described above with reference to FIG. 6, with the following differences:
  • IP and UDP headers are no longer present.
  • The Ethernet Type value is different.
  • The CHECKSUM PAD field is replaced by the FC CRC field.
  • the FC CRC field is a 4-byte field containing the Fibre Channel CRC calculated over the FCP header and payload. This field may be inserted by a source when a Fibre Channel data frame is encapsulated with no changes. Thus, the CRC received with the frame is still valid.
  • the CHECKSUM PAD flag is replaced by the FC CRC PRESENT flag. This bit indicates if the FC CRC field is present in the frame. Note that the CHECKSUM PAD field has no meaning since there is no need to calculate a UDP checksum.
  • the FRAME PAD LENGTH may have a non-zero value since the encapsulated frame length may be less than the Ethernet minimum of 64.
  • the UDP Header contains a Destination Port field and a Source Port field. The normal usage of these fields is to identify the software applications that are communicating with each other.
  • An application requests a port number for use when sending a UDP “datagram”. This port number becomes the source port number for each UDP datagram sent by the application.
  • the destination port number is used by the UDP layer to determine the application to which the datagram will be forwarded.
  • FIG. 14 a illustrates “demultiplexing” of UDP datagrams as is typical in the industry.
  • FIGS. 14 b and 14 c illustrate ways to add an SoIP layer according to embodiments of the present invention.
  • FIG. 14 b illustrates frame demultiplexing when there is a single port number assigned to all SoIP devices. Further demultiplexing is then performed using the SoIP socket number to determine the device. Routing data frames to applications is then performed based on the FCP exchange numbers located in the FCP header.
  • FIG. 14 c illustrates a similar example, but with separate UDP port numbers assigned to each SoIP device. In this case, it is not necessary to examine the SoIP socket number in order to forward the UDP datagram. (The SoIP socket number and IP address must still uniquely identify the device). The choice of whether to use a single UDP port number for each SoIP device or one UDP port number for all devices is implementation dependent.
  • the UDP demultiplexing examples illustrated in FIGS. 14 b and 14 c are oriented toward a server with one or more host bus adapters (where the host bus adapters are the SoIP devices).
  • a switch is generally less complicated in the sense that data frames are forwarded to end devices and the application layer does not have to be handled.
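  • The two demultiplexing options of FIGS. 14b and 14c can be sketched as follows; the table names are hypothetical and only the lookup order is meant to mirror the description above.

```python
# Sketch: UDP datagram demultiplexing for SoIP (illustrative tables and names).
def demux_single_port(soip_socket: int, fcp_exchange_id: int,
                      devices: dict, exchanges: dict):
    # FIG. 14b style: all SoIP devices share one UDP port, so the SoIP socket
    # number selects the device and the FCP exchange ID selects the application.
    device = devices[soip_socket]
    app = exchanges[(soip_socket, fcp_exchange_id)]
    return device, app

def demux_per_device_port(udp_dst_port: int, port_to_device: dict):
    # FIG. 14c style: one UDP port per SoIP device, so the destination port
    # alone selects the device (the IP address/socket pair stays unique).
    return port_to_device[udp_dst_port]
```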
  • the addressing mechanisms described above allow software applications to appear as SoIP devices by registering with the name server using a different address. This opens up the possibility for applications to advertise themselves in the name server for use by other applications.
  • An example is a COPY manager that could be used by a higher level backup application.
  • Each storage device, when it registers with the name server, must include the UDP port number to use when sending data frames to the device.
  • the destination port would save the source port number for use in sending a reply.
  • this mechanism is not feasible for use with “legacy” FC switches since it requires the switch to associate the source port numbers with the exchange ID's. It is much simpler to require a storage device to always use the same UDP port number.
  • a storage device is identified by 3 parameters in the name server database: IP Address, UDP Port Number, and SoIP socket number.
  • An additional parameter required is the physical address (e.g. Ethernet MAC address) which is determined in the normal manner for IP networks.
  • ARP: Address Resolution Protocol
  • the physical address to use can also be learned when a frame is received from a device. For example, the physical address can be learned when a Port Login request is received.
  • the physical address may not be the physical address of the actual device but the address of an IP router.
  • the SoIP Name Server must have a UDP Port number that is known by all of the SoIP devices within an SoIP network since the port number cannot be learned from another source. This could be a “well-known” port number or a registered port number. This approach is similar to a Domain Name Server (DNS) that has a well-known port number of 53. The assignment of “well-known” port numbers is done by the IANA (Internet Assigned Numbers Authority).
  • a conversation is a set of data frames that are related and which should arrive in order. However, it is assumed that different conversations have no ordering relationship with one another. In other words, the ordering of frames from different conversations can be changed with no effect. For example, assume that frames for 3 conversations (A, B and C) are transmitted interleaved (A1 sent first).
  • the frames for a particular conversation arrive in order with respect to each other, but may arrive out of order with respect to frames from other conversations.
  • the ability to identify different conversations allows load balancing to be performed by allowing traffic to be routed on a conversation basis.
  • Switches and routers can determine conversations based on several parameters within a data frame including Destination/Source addresses, IP Protocol, UDP/TCP Port Numbers, etc. The parameters actually used are dependent on the switch/router implementation.
  • FIG. 15 is a high level block diagram which illustrates the basic architecture for a switch port that supports both Fibre Channel and Gigabit Ethernet according to an embodiment of the present invention.
  • the Fibre Channel and Gigabit Ethernet ports use the same encoding/decoding method (8B/10B) with each port requiring a serializer/deserializer (SERDES) block for converting to/from the high speed serial interface. Therefore, these two interfaces share the 8B/10B block 310 and SERDES block 315 in this embodiment as shown in FIG. 15 .
  • These two interface types differ in clock speed with Fibre Channel operating at 1062.5 MHz and Gigabit Ethernet operating at 1250 MHz. Higher speed versions of these interfaces are being developed which will also have a different clock speed.
  • a multiplexer 345 selects the clock used by the logic based on the port type.
  • these two interfaces share the switch fabric interface logic block 320 which interfaces with the switch fabric (including the management interface).
  • the MAC blocks (blocks 325 and 330 ) implement the appropriate protocol state machines for the interface (Fibre Channel or Gigabit Ethernet).
  • the MAC blocks 325 and 330 convert received data into frames which are forwarded to the routing logic blocks 335 and 340 , respectively.
  • the MAC blocks 325 and 330 also receive data frames from the routing logic blocks 335 and 340 , respectively, which are then transmitted according to the interface's (Fibre Channel or Gigabit Ethernet) protocol.
  • Routing logic blocks 335 and 340 determine where each received frame should be routed based on addressing information within the frame. Routing logic blocks 335 and 340 also perform any modifications to the frames that are required. For example, a routing logic block will remove the SoIP encapsulation from a frame being forwarded to a Fibre Channel port. The routing logic block then sends the frame to the switch fabric with an indication of the destination output ports. Egress data frames (frames from the switch fabric to the output port) are received by a routing logic block and forwarded to the associated MAC. Additional processing may be performed on the frame by the routing logic block before the MAC receives the frame. For example, Ethernet port routing logic block 340 may convert a Fibre Channel frame into an SoIP frame.
  • routing logic block 350 includes logic blocks which are dependent on the port type and other blocks that are common to both port types. This optimization reduces the number of logic gates required on an ASIC. Routing block 350 determines where a frame is routed based on addressing information within the data frame. This function is known as address resolution and is performed for both Fibre Channel and Gigabit Ethernet data frames. Therefore, address resolution logic can be shared by these two port interfaces though it is necessary for the routing logic to select different data based on the port type.
  • Routing Logic block 350 can be implemented as hard coded logic or as a programmable method using a network processor, which is designed specifically for processing packets and which can be programmed to route either Fibre Channel frames or Ethernet frames. Therefore, the routing logic hardware can be shared by using different network processor software.
  • routing logic block 350 also includes an input and output FIFO memory which is shared by the two port interfaces. Additional logic which can be shared includes statistics registers and control registers. Statistics registers are used to count the number of frames received, frames transmitted, bytes received, bytes transmitted, etc. A common set of statistics registers can be used. These registers are modified by control signals from each MAC. Control registers determine the operating mode of each MAC. A common set of statistics and control registers reduces the logic required to implement the registers and for interfacing with an external control source such as a switch management CPU.
  • the low-level port interface logic (e.g., FC MAC block 325 and Ethernet MAC block 330 ) is combined into a single MAC block 360 .
  • a Field Programmable Gate Array (FPGA) 370 is used to select the interface protocol supported by the port.
  • the FPGA configuration loaded would be based on the port type.
  • separate FPGA code is developed for the Fibre Channel and Gigabit Ethernet interfaces.
  • the FPGA logic can be optimized for the particular interface.
  • a single hardware design supports both interfaces, with software determining the FPGA code to be downloaded based on the port type.
  • a common port must also deal with the physical interface external to an ASIC.
  • such an interface may include, for example, a copper, multi-mode fiber or single-mode fiber interface.
  • the components are not necessarily the same between Fibre Channel and Ethernet.
  • a Gigabit Interface Converter (GBIC) 380 is provided to allow a user to select the desired physical interface.
  • a GBIC is a standardized module which has a common form factor and electrical interface and allows any of the many physical interfaces to be installed.
  • GBIC modules are available from many vendors (e.g. HP, AMP, Molex, etc.) and support all of the standard Fibre Channel and Gigabit Ethernet physical interfaces.
  • FIG. 19 shows a block diagram of a common FC/Gigabit Ethernet port interface (e.g., as shown in FIGS. 15 , 16 , 17 and 18 ) combined with a GBIC interface according to this embodiment.
  • the ASIC connects to a GBIC connector 385 which allows the user to change GBIC modules.
  • the user can select the media type by installing the appropriate GBIC 380 .
  • GBIC modules typically contain a serial EEPROM whose contents can be read to determine the type of module (e.g. Fibre Channel, Gigabit Ethernet, Infiniband, Copper, Multi-mode, Single-mode, etc.).
  • the GBIC can thus indicate the type of interface, e.g., FC or GE or Infiniband, to use.
  • the port interface type is user switchable/configurable, and in another embodiment the type of the link interface is automatically determined through added intelligence, for example, through a “handshake”.
  • an SoIP intelligent network interface card (NIC) 400 is provided as shown in FIG. 20 .
  • NIC card 400 is able to send and receive both IP and SoIP traffic.
  • NIC card 400 has the intelligence to determine the type of traffic and direct it accordingly.
  • the host 410 may issue both storage commands and network commands to NIC card 400 through the PCI interface 420 . These commands are sent with a specified address which is used to direct the commands to either the Direct Path or the Storage Traffic Engine. Storage commands are issued via the SCSI Command Set, and Network commands are issued via Winsock and/or TCP/IP.
  • NIC card 400 directs storage commands to the Storage Traffic Engine 430 based on the specified address.
  • Storage Traffic Engine 430 handles the exchange management and sequence management for the duration of the SCSI operation.
  • SCSI operations are then carried out via SoIP and transmitted to the network 470 via a media access controller (MAC) block 450 , which in one embodiment is a Gigabit Ethernet MAC.
  • MAC media access controller
  • NIC card 400 directs non-SoIP traffic to the Direct Path 440 based on the specified address.
  • the Direct Path 440 processes the commands and transmits the specified packets to network 470 via block 450 .
  • NIC 400 demultiplexes the traffic and directs it accordingly.
  • Storage traffic received as SoIP is sent to storage traffic block 430 .
  • Non-SoIP traffic is sent directly to the host via direct path 440 .
  • the multiplexer block 460 handles arbitration for the output path when both Direct Path 440 and Storage Traffic Engine 430 simultaneously send traffic to MAC 450 .
  • Mux block 460 demultiplexes received traffic and sends it accordingly to either Direct Path 440 or Storage Traffic Engine 430 (a sketch of this steering logic appears after this list).
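  • As an illustrative sketch of the two demultiplexing schemes of FIGS. 14 b and 14 c, a receiver might forward frames as follows. The port numbers, socket numbers and field names (udp_dst_port, soip_socket, fcp_exchange_id) are hypothetical placeholders, since the specification does not fix concrete values.

```python
SOIP_SHARED_PORT = 4000          # assumed "one UDP port for all SoIP devices" (FIG. 14 b)

class SoIPDevice:
    def __init__(self, name):
        self.name = name
        self.exchanges = {}      # FCP exchange ID -> application context

    def deliver(self, frame):
        # Final routing to the application is based on the FCP exchange number.
        app = self.exchanges.get(frame["fcp_exchange_id"], "unknown application")
        print(f"{self.name}: exchange {frame['fcp_exchange_id']} -> {app}")

# Devices keyed by SoIP socket number (FIG. 14 b) and by UDP port (FIG. 14 c).
devices_by_socket = {0x010200: SoIPDevice("hba0"), 0x010300: SoIPDevice("hba1")}
devices_by_port = {4001: devices_by_socket[0x010200], 4002: devices_by_socket[0x010300]}

def demultiplex(frame):
    if frame["udp_dst_port"] == SOIP_SHARED_PORT:
        device = devices_by_socket[frame["soip_socket"]]   # FIG. 14 b: shared port
    else:
        device = devices_by_port[frame["udp_dst_port"]]    # FIG. 14 c: port per device
    device.deliver(frame)

demultiplex({"udp_dst_port": 4000, "soip_socket": 0x010300, "fcp_exchange_id": 7})
demultiplex({"udp_dst_port": 4001, "soip_socket": 0x010200, "fcp_exchange_id": 3})
```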
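  • A minimal sketch of the receive-side steering performed by NIC 400 is shown below. The classification rule (a dedicated UDP destination port identifying SoIP traffic) and the port value are assumptions for illustration; the description states only that the NIC determines the type of traffic and directs it accordingly.

```python
ASSUMED_SOIP_UDP_PORT = 4000   # hypothetical; used only to classify SoIP traffic here

def steer_received_frame(frame, storage_traffic_engine, direct_path):
    """Send SoIP storage traffic to the Storage Traffic Engine 430,
    everything else to the Direct Path 440 (host network stack)."""
    is_soip = (frame.get("ip_proto") == "UDP"
               and frame.get("udp_dst_port") == ASSUMED_SOIP_UDP_PORT)
    if is_soip:
        storage_traffic_engine(frame)
    else:
        direct_path(frame)

# Minimal stand-ins for blocks 430 and 440:
steer_received_frame(
    {"ip_proto": "UDP", "udp_dst_port": 4000, "payload": b"FCP..."},
    storage_traffic_engine=lambda f: print("-> Storage Traffic Engine 430"),
    direct_path=lambda f: print("-> Direct Path 440"),
)
```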

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A method and apparatus for transferring data between IP devices and SCSI or Fibre Channel devices. The device interfaces may be SCSI, Fibre Channel or IP interfaces. Data is switched between SCSI and IP, Fibre Channel and IP, or between SCSI and Fibre Channel. Data can also be switched from SCSI to SCSI, IP to IP and FC to FC. The port interfaces provide the conversion from the input frame format to an internal frame format, which can be routed within the apparatus. The amount of processing performed by each port interface is dependent on the interface type. The processing capabilities permit rapid transfer of information packets between multiple interfaces at latency levels meeting the stringent requirements for storage protocols. The configuration control can be applied to each port on a switch and, in turn, each switch on the network, via an SNMP or Web-based interface, providing flexible, programmable control.

Description

    CROSS REFERENCES TO RELATED APPLICATIONS
  • This application is a continuation of U.S. application Ser. No. 11/691,320 filed 26 Mar. 2007 entitled “Method and apparatus for transferring data between IP network devices and SCSI and Fibre Channel devices over an IP network,” which is a continuation of U.S. Pat. No. 7,197,047 also entitled “Method and apparatus for transferring data between IP network devices and SCSI and Fibre Channel devices over an IP network,” which is a continuation of U.S. Pat. No. 6,400,730, also entitled “Method and apparatus for transferring data between IP network devices and SCSI and Fibre Channel devices over an IP network,” which claims the benefit of priority pursuant to 35 U.S.C. §119(e) of U.S. provisional application No. 60/123,606 filed 10 Mar. 1999 also entitled “Method and apparatus for transferring data between IP network devices and SCSI and Fibre Channel devices over an IP network,” which are all hereby incorporated by reference as though fully set forth herein.
  • BACKGROUND OF THE INVENTION
  • The present invention relates to transferring information between storage devices and a network via a switched, packetized communications system. In particular, the present invention relates to methods and apparatus for receiving, translating, and routing data packets between SCSI (Small Computer Systems Interface), Fibre Channel and Ethernet devices in a flexible, programmable manner.
  • In enterprise computing environments, it is desirable and beneficial to have multiple servers able to directly access multiple storage devices to support high-bandwidth data transfers, system expansion, modularity, configuration flexibility and optimization of resources. In conventional computing environments, such access is typically provided via file system level Local Area Network (LAN) connections, which operate at a fraction of the speed of direct storage connections. As such, access to storage systems is highly susceptible to bottlenecks.
  • Storage Area Networks (SANs) have been proposed as one method of solving this storage access bottleneck problem. By applying the networking paradigm to storage devices, SANs enable increased connectivity and bandwidth, sharing of resources, and configuration flexibility. The current SAN paradigm assumes that the entire network is constructed using Fibre Channel switches. Therefore, most solutions involving SANs require implementation of separate networks: one to support the normal LAN and another to support the SAN. The installation of new equipment and technology, such as new equipment at the storage device level (Fibre Channel interfaces), the host/server level (Fibre Channel adapter cards) and the transport level (Fibre Channel hubs, switches and routers), into a mission-critical enterprise computing environment could be described as less than desirable for data center managers, as it involves replication of network infrastructure, new technologies (i.e., Fibre Channel), and new training for personnel. Most companies have already invested significant amounts of money constructing and maintaining their network (e.g., based on Ethernet and/or ATM). Construction of a second high-speed network based on a different technology is a significant impediment to the proliferation of SANs. Therefore, a need exists for a method and apparatus that can alleviate problems with access to storage devices by multiple hosts, while retaining current equipment and network infrastructures, and minimizing the need for new training for data center personnel.
  • In general, a majority of storage devices currently use “parallel” SCSI or Fibre Channel data transfer protocols whereas most LANs use an Ethernet protocol, such as Gigabit Ethernet. SCSI, Fibre Channel and Ethernet are protocols for data transfer, each of which uses a different individual format for data transfer. For example, SCSI commands were designed to be implemented over a parallel bus architecture and therefore are not packetized. Fibre Channel, like Ethernet, uses a serial interface with data transferred in packets. However, the physical interface and frame formats between Fibre Channel and Ethernet are not compatible. Gigabit Ethernet was designed to be compatible with existing Ethernet infrastructures and is therefore based on an Ethernet packet architecture. Because of these differences there is a need for new methods and apparatus to allow efficient communication between these protocols.
  • SUMMARY OF THE INVENTION
  • The present invention solves the above and other problems, thereby advancing the state of the useful arts, by providing methods and apparatus for transferring data between storage device interfaces and network interfaces. In particular, the present invention brings sophisticated SAN capabilities to existing enterprise computing configurations, without the installation of costly Fibre Channel switches and hubs, by providing the means for Internet Protocol (IP) devices to transparently communicate with SCSI and Fibre Channel devices over an IP network. The present invention accomplishes this through the use of Fibre Channel Protocol (FCP), an industry standard developed for implementation of SCSI commands over a Fibre Channel network. The invention allows the storage devices to retain the use of standard SCSI and Fibre Channel storage interfaces and construct a SAN using a company's existing network infrastructure. Therefore, no changes are required in host bus adapters (HBA) or storage devices (e.g. disk drives, tape drives, etc).
  • According to the present invention, methods and apparatus are provided for transferring data between IP devices (including, but not limited to, Gigabit Ethernet devices) and SCSI or Fibre Channel devices. The device interfaces may be either SCSI, Fibre Channel or IP interfaces such as Gigabit Ethernet. Data is switched between SCSI and IP, Fibre Channel and IP, or between SCSI and Fibre Channel. Data can also be switched from SCSI to SCSI, Fibre Channel to Fibre Channel and IP to IP. The port interfaces provide the conversion from the input frame format to an internal frame format, which can be routed within the apparatus. The apparatus may include any number of total ports. The amount of processing performed by each port interface is dependent on the interface type. The processing capabilities of the present invention permit rapid transfer of information packets between multiple interfaces at latency levels meeting the stringent requirements for storage protocols. The configuration control can be applied to each port on a switch and, in turn, each switch on the network, via an SNMP or Web-based interface, providing a flexible, programmable control for the apparatus.
  • According to one aspect of the present invention, a method is provided for routing data packets in a switch device in a network such as a SAN. The method typically comprises the steps of receiving a packet from a first network device at a first port interface of the switch device, wherein the packet is one of a SCSI formatted packet (i.e., SCSI formatted data stream converted into a packet), a Fibre Channel (FC) formatted packet and an Internet protocol (IP) formatted packet, wherein the first port interface is communicably coupled to the first network device, and converting the received packet into a packet having an internal format. The method also typically includes the steps of routing the internal format packet to a second port interface of the switch device, reconverting the internal format packet to one of a SCSI formatted packet, an FC formatted packet or an IP formatted packet, and transmitting the reconverted packet to a second network device communicably coupled to the second port interface.
  • According to another aspect of the present invention, a network switch device is provided which typically comprises a first port interface including a means for receiving data packets from a network device, wherein the receiving means receives one of a SCSI formatted packet and a Fibre Channel (FC) formatted packet from a first network device, and a means for converting received packets into packets having an internal format, wherein the received data packet is converted into a first packet having the internal format. The switch device also typically comprises a second port interface including a means for reconverting packets from the internal format to an IP format, wherein the first packet is converted into a packet having an IP format, and a means for transmitting IP packets to a network, wherein the IP formatted packet is transmitted to an IP network. A means for routing the first packet to the second port interface is also provided.
  • According to yet another aspect of the present invention, a network switch device is provided which typically comprises a first port interface including a means for receiving data packets from an IP network, wherein the first interface means receives a packet in an IP format, and a means for converting received packets into packets having an internal format, wherein the received packet is converted into a first packet having an internal format. The switch device also typically comprises a second port interface including a means for reconverting packets having the internal format to packets having the SCSI format, and a means for transmitting reconverted packets to a SCSI network device. The switch device further typically includes a third port interface having a means for reconverting packets having the internal format to packets having the FC format, and a means for transmitting reconverted packets to a FC network device. A means for routing packets between the first, second and third port interfaces is also typically provided. In operation, wherein if the first packet is routed to the second port interface, the first packet is converted to the SCSI format and transmitted to the SCSI network device, and wherein if the first packet is routed to the third port interface, the first packet is converted to the FC format and transmitted to the FC network device.
  • According to a further aspect of the present invention, a network switch device is provided for use in a storage area network (SAN). The switch device typically comprises a first port interface communicably coupled to a SCSI device, wherein the first port interface converts SCSI formatted data packets received from the SCSI device into data packets having an internal format, and wherein the first port interface converts data packets having the internal format into SCSI formatted data packets. The switch device also typically comprises a second port interface communicably coupled to a FC device, wherein the second port interface converts FC formatted data packets received from the FC device into data packets having the internal format, and wherein the second port interface converts data packets having the internal format into FC formatted data packets. The switch device further typically includes a third port interface communicably coupled to a IP device, wherein the third port interface converts IP formatted data packets received from the IP device into data packets having the internal format, and wherein the third port interface converts data packets having the internal format into IP formatted data packets, and a switch fabric for routing data packets having the internal format between the first, second and third port interfaces. In typical operation, when a first one of the SCSI, FC and IP devices sends a first data packet to a second one of the SCSI, FC and IP devices, the port interface coupled to the first device converts the first data packet to a packet having the internal format and routes the internal format packet through the switch fabric to the port interface coupled to the second device, wherein the port interface coupled to the second device reconverts the internal format packet into the format associated with the second device and sends the reconverted packet to the second device.
  • According to yet a further aspect of the present invention, a network switch device for use in a storage area network (SAN) is provided. The switch may comprise any combination of Fibre Channel, SCSI, Ethernet and Infiniband ports, and may comprise any number of total ports. The switch device typically comprises a first port interface communicably coupled to one of a SCSI device(s), an FC device, or an IP device, a second port interface, wherein the second port interface is configurable to communicate with either a FC device or an Ethernet device, and a switch fabric for routing data packets having the internal format between the first and second port interfaces. In typical operation, when the second port interface is configured to communicate with a FC device, the second port interface converts FC formatted data packets received from the FC device into data packets having an internal format, and wherein the second port interface converts data packets having the internal format received from the switch fabric into FC formatted data packets, and wherein when the second port interface is configured to communicate with an Ethernet device, the second port interface converts Ethernet formatted data packets received from the Ethernet device into data packets having the internal format, and wherein the second port interface converts data packets having the internal format received from the switch fabric into Ethernet formatted data packets. The second port interface can be either self-configurable or user configurable.
  • Reference to the remaining portions of the specification, including the drawings and claims, will realize other features and advantages of the present invention. Further features and advantages of the present invention, as well as the structure and operation of various embodiments of the present invention, are described in detail below with respect to the accompanying drawings. In the drawings, like reference numbers indicate identical or functionally similar elements.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates an example of a SAN constructed according to the present invention;
  • FIG. 2 is a block diagram of an overview of the Storage over Internet Protocol (SoIP) implementation;
  • FIG. 3 illustrates the required protocol conversion steps between Fibre Channel, SCSI and IP devices in the apparatus switch fabric according to an embodiment of the present invention;
  • FIG. 4 is an overview of the legacy storage protocol conversion method by which the functionality of the invention is achieved;
  • FIG. 5 is a high level switch diagram outlining the basic architecture of the physical apparatus according to an embodiment of the present invention;
  • FIGS. 6 a-c illustrate FCP packet encapsulation according to an embodiment of the present invention;
  • FIG. 7 shows the frame flow for the “session” initialization for Fibre Channel devices connected to an SoIP network;
  • FIGS. 8 and 9 show the flow of data frames for a node login initiated by FC port A of switch 1 to FC Port B of switch 2 located remotely according to an embodiment of the present invention;
  • FIG. 10 shows the routing of Port Login Request and Response frames for local FC ports according to an embodiment of the present invention;
  • FIG. 11 shows an example of the address domains which exist in a network according to one embodiment of the present invention;
  • FIGS. 12 a-d illustrate a network architecture and address tables for a Third Party Command example;
  • FIG. 13 illustrates layer 2 FCP packet encapsulation according to an embodiment of the present invention;
  • FIGS. 14 a-c illustrate examples of UDP Frame demultiplexing according to embodiments of the present invention;
  • FIG. 15 is a high level block diagram which illustrates the basic architecture for a switch port that supports both Fibre Channel and Ethernet according to an embodiment of the present invention;
  • FIG. 16 is a high level block diagram which illustrates the basic architecture for a switch port that supports both Fibre Channel and Ethernet, where two routing blocks are combined into a single block according to an embodiment of the present invention;
  • FIG. 17 is a high level block diagram which illustrates the basic architecture for a switch port that supports both Fibre Channel and Ethernet wherein low-level port interface logic blocks are combined according to an embodiment of the present invention;
  • FIG. 18 is a high level block diagram which illustrates the basic architecture for a switch port that supports both Fibre Channel and Ethernet using a Field Programmable Gate Array (FPGA) according to an embodiment of the present invention;
  • FIG. 19 shows a block diagram of a common FC/Gigabit Ethernet port combined with a GBIC interface according to an embodiment of the present invention; and
  • FIG. 20 illustrates the architecture of an intelligent network interface card (NIC) according to an embodiment of the present invention.
  • DESCRIPTION OF THE SPECIFIC EMBODIMENTS
  • FIG. 1 illustrates an example of a storage area network (SAN) 10 according to an embodiment of the present invention. As shown, network 10 includes numerous storage devices, such as tape libraries 15, RAID drives 20 and optical drives 25 (e.g., CD, DVD, etc.) and servers 30. The storage devices can be either storage targets (e.g., tape libraries 15, RAID drives 20, etc.) or initiators (e.g., servers 30). Note that a device could be both an initiator and a target. In a preferred embodiment, the invention is implemented in a switching device 35 within network 10. For example, as shown in FIG. 1, each switching device 35 is an “edge” switch which provides the connectivity between nodes (i.e., one or more storage devices) and a network 40. In other words, the switch resides on the “edge” of the network where the devices are located. Each edge switch 35 allows connected storage elements to communicate through the edge switch with no traffic being sent to network 40. Each edge switch 35 also allows storage elements connected to different edge switches to communicate with each other through network 40. In a preferred embodiment, network 40 is an Ethernet network, but other networks may be used, for example, Asynchronous Transfer Mode (ATM)-based or FDDI-based networks, or the like.
  • In one embodiment, a switching device 35 is implemented in an SoIP (Storage over Internet Protocol) storage area network (SAN) as shown in FIG. 2. According to the present invention, SoIP is a framework for transporting SCSI commands and data over IP networks using the Fibre Channel Protocol for SCSI (FCP) for communication between IP networked storage devices. A majority of storage devices currently communicate using either a “parallel” SCSI bus or a Fibre Channel serial interface. FCP is an FC-4 Upper Layer Protocol for sending SCSI commands and data over a Fibre Channel network yielding a “serial” SCSI network. The SoIP framework enables FCP for use on an IP network by defining the SoIP protocol. Storage devices and host bus adapters operating the SoIP protocol form a storage area network (SAN) directly on an IP network. This framework offers an enormous advantage in the installation and utility of SANs.
  • As shown in FIG. 2, each SoIP device 50 converts SCSI commands and data into FCP data frames in FCP block 52. The SoIP protocol layer block 54 then encapsulates these FCP frames in multiple IP packets using either the User Datagram Protocol (UDP) or Transport Control Protocol (TCP). IP port 56 forwards the packet to IP network 60, which routes the IP packets between the devices 50 or to switch 35. IP network 60 is preferably an Ethernet network, but may be based on any IP-compatible media including ATM, FDDI, SONET and the like. The storage name server 65 serves as a database where devices store their own information and retrieve information on other devices in the SoIP network. The SoIP proxy 70 performs protocol conversion between SoIP based on UDP and SoIP based on TCP.
  • Because the majority of storage devices currently use “parallel” SCSI or Fibre Channel protocols, the transition to SoIP-based SANs may be hampered unless such “legacy” devices can be connected to an SoIP network. For these “legacy” devices, a switch as shown in FIG. 3 is provided for connection into an SoIP SAN.
  • FIG. 3 illustrates data exchange between storage devices using a switch 135 according to an embodiment of the present invention. In this embodiment, switch 135 is configured to receive data from different interfaces, each of which has a different data or frame format. SCSI device 105 transmits data using a “parallel” SCSI interface 106, Fibre Channel (FC) device 110 transmits data using Fibre Channel interface 111 and Ethernet device 115 transmits data using Ethernet interface 116. Switch 135 translates data received from a source port in one of the three different formats into an internal format and transfers the data in the internal format through switch fabric 140 to a destination port. The destination port translates the data back into the native format appropriate for the connection thereto.
  • In this embodiment, each device, e.g., SCSI device 105, FC device 110, Ethernet device 115, or generic IP device 120 (e.g., disk drive, tape drive, server), performs storage operations based on the SCSI Command Set. For Fibre Channel device 110, the SCSI commands and data are converted to FCP and transmitted using Fibre Channel interface 111. For SCSI device 105 the SCSI commands and data are transferred directly using a “parallel” bus 106. In this embodiment, the SCSI port interface 125 of switch 135 acts like a SCSI to FC bridge so that the SCSI port looks like an FC port from the point of view of switch fabric 140. As shown, the SCSI data is preferably converted to FCP, and is not actually transmitted using a Fibre Channel interface. For Ethernet device 115, SCSI commands and data are converted to FCP and then encapsulated in an IP packet using UDP or TCP. The IP packet is then encapsulated in an Ethernet frame and transmitted using Ethernet interface 116. Note that the term “SCSI device” implies a device with a “parallel SCSI bus” while the term “Fibre Channel device” implies a device with a Fibre Channel interface. Both devices operate as SCSI devices at the command level. Note that SCSI device 105 does not convert the SCSI commands and data to an FCP format. Therefore, it is not possible to transfer data between FC device 110 and SCSI device 105 directly. As shown in FIG. 3, it is possible for all devices connected to switch 135 to exchange data frames because the data formats of all interfaces into switch fabric 140 are FCP compatible frames. Also note that it is possible to replace Fibre Channel with another interface. For example, FIG. 3 shows a storage device constructed using Ethernet in the same manner as a device is constructed with FC. Ethernet simply replaced Fibre Channel as the media for transport. Infiniband may also be implemented, for example in generic IP device 120. As is well known, Infiniband is an I/O interface that merges the work of NGIO (Next Generation I/O) and Future I/O.
  • FIG. 4 illustrates data exchange between Fibre Channel, SCSI and IP devices in switch apparatus 135 according to an embodiment of the present invention. The example in FIG. 4 is for an Ethernet based IP network 160; however, any other IP network based on other protocols such as ATM, FDDI, etc. may be used. Similar to the embodiment in FIG. 3, FIG. 4 shows the protocol translations which occur for each device. SCSI device 105 communicates with switch 135 using SCSI commands directly with no encapsulation of data or commands in data frames. FC device 110 uses the FCP protocol to send SCSI commands and data to switch 135. Switch 135 converts the received data to a common protocol based on FCP to allow the devices to communicate with each other. In addition, switch 135 performs address translation between the Fibre Channel and SCSI addressing schemes to the IP addressing method as will be discussed in more detail below. This is done transparently so that no changes are required in Fibre Channel device 110 or SCSI device 105, or in any host bus adapters, driver software or application software.
  • FIG. 5 is a high-level switch diagram outlining the basic architecture of a physical switch apparatus 235 according to an embodiment of the present invention. In this embodiment, switch 235 includes three main elements: switch fabric 240, management processor 250 and port interfaces 270. Switch fabric 240 provides a high bandwidth mechanism for transferring data between the various port interfaces 270 as well as between port interfaces 270 and management processor 250. Management processor 250 performs management related functions for switch 235 (e.g. switch initialization, configuration, SNMP, Fibre Channel services, etc.) primarily through management bus 255.
  • Port interfaces 270 convert data packets from the input frame format (e.g., parallel SCSI, FC, or Ethernet) to an internal frame format. The internal frame format data packets are then routed within switch fabric 240 to the appropriate destination port interface. Port interfaces 270 also determine how packets are routed within the switch. The amount of processing performed by each port interface 270 is dependent on the interface type. SCSI ports 270-1 and 270-2 provide the most processing because the SCSI interface is half-duplex and it is not frame oriented. The SCSI port interfaces 270-1 and 270-2 also emulate the functionality of a SCSI host and/or target. Fibre Channel ports 270-3 and 270-4 require the least amount of processing because the internal frame format is most compatible with Fibre Channel. In essence, IP ports 270-5 and 270-6 (e.g., Ethernet ports) and SCSI ports 270-1 and 270-2 convert data received into an internal frame format before sending the packets through switch fabric 240.
  • Because FCP frames are not directly compatible with an Ethernet interface as they are with a Fibre Channel interface, the transmission of FCP packets on an Ethernet interface requires that an FCP frame be encapsulated in an Ethernet frame as shown in FIG. 6 a.
  • FIG. 6 a illustrates FCP packet encapsulation in an IP frame carried over an Ethernet frame according to an embodiment of the present invention. Field Definitions for FIG. 6 a include the following:
  • DA: Ethernet Destination Address (6 bytes).
  • SA: Ethernet Source Address (6 Bytes).
  • TYPE: The Ethernet packet type.
  • CHECKSUM PAD: An optional 2-byte field which may be used to guarantee that the UDP checksum is correct even when a data frame begins transmission before all of the contents are known. The CHECKSUM PAD bit in the SoIP Header indicates if this field is present.
  • ETHERNET CRC: Cyclic Redundancy Check (4 bytes).
  • As shown in FIG. 6 a, the SoIP Header field contain the following parameters:
  • CLASS: This 4-bit field indicates the class of service. In one embodiment, only the values 2 or 3 are used.
  • VERS: This 4-bit field indicates the protocol version of SoIP.
  • SoIP FLAGS: This 8-bit field contains bits that indicate various parameters for a data frame as shown in FIG. 6 b.
  • In FIG. 6 a, the User Datagram Protocol (UDP) Header is the protocol used within the IP packet. TCP may also be used. The UDP header, defined in RFC 768, is 8 bytes in length consisting of four 16-bit fields as shown in FIG. 6 c, with the following field definitions:
  • SOURCE PORT: An optional field. When meaningful, it indicates the port of the sending process, and may be assumed to be the port to which a reply should be addressed in the absence of other information. If not used, a value of zero is inserted.
  • DESTINATION PORT: has a meaning within the context of a particular internet destination address.
  • LENGTH: the length, in bytes, of the user datagram including the UDP header and data (thus, if there were no data in the datagram, the length would be 8). For an encapsulated FCP packet, the UDP Length is the sum of the UDP Header Length, FCP Header length, and FCP Payload length and optionally the checksum pad.
  • CHECKSUM: the 16-bit one's complement of the one's complement sum of a pseudo header of information from the IP header, the UDP header, and the data, padded with bytes of zero at the end (if necessary) to make a multiple of 2 bytes.
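  • As an illustrative sketch of the encapsulation just described, the following Python fragment builds the UDP header for an encapsulated FCP frame, computing the LENGTH and CHECKSUM fields as defined above (including the optional 2-byte checksum pad). The SoIP header itself is omitted because its complete layout is not reproduced here, and the addresses and port numbers are hypothetical.

```python
import struct

def udp_checksum(src_ip, dst_ip, udp_segment):
    """16-bit one's-complement sum over the IPv4 pseudo header plus the UDP
    segment, padded to an even length, as described for the CHECKSUM field."""
    pseudo = struct.pack("!4s4sBBH", src_ip, dst_ip, 0, 17, len(udp_segment))
    data = pseudo + udp_segment
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF or 0xFFFF      # a computed value of 0 is sent as all ones

def encapsulate_fcp(src_ip, dst_ip, src_port, dst_port, fcp_header, fcp_payload,
                    checksum_pad=False):
    body = fcp_header + fcp_payload + (b"\x00\x00" if checksum_pad else b"")
    length = 8 + len(body)                  # UDP header length + data length
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    csum = udp_checksum(src_ip, dst_ip, header + body)
    return struct.pack("!HHHH", src_port, dst_port, length, csum) + body

segment = encapsulate_fcp(bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]),
                          4001, 4000, b"\x00" * 24, b"DATA")
print(len(segment))                         # 8 + 24 + 4 = 36
```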
  • In one embodiment, a switch 235 encapsulates FC packets into an Ethernet Frame with a “wrapper” around the FC information. The encapsulation of an FCP data frame in an Ethernet packet may require that the FCP data frame be limited in size because the maximum FCP data frame size is 2136 bytes (24 byte header+2112 byte payload) whereas an Ethernet packet has a maximum size of 1518 bytes. The use of Ethernet Jumbo Frames, which permit packet sizes up to 9 Kbytes to be used, eliminates the need to limit the Fibre Channel frame size. However, support for Ethernet jumbo frames is limited within the existing network infrastructure. Therefore, FCP data frames need to be limited; otherwise, a large FCP data frame may need to be “fragmented” into 2 separate Ethernet frames. The Login procedures defined in the Fibre Channel standard allow devices to negotiate the maximum payload with the switch fabric 240. Thus, the switch fabric 240 can respond to a login with a smaller payload size than the maximum (e.g., 1024 bytes). Switch 235 makes use of this fact to limit FC packets to a size which can be encapsulated in an Ethernet packet, eliminating the need for fragmenting FC packets. According to one embodiment, a node's maximum receive data field size is provided to switch fabric 240 during “Fabric Login” and to each destination node during “Port Login.” The fabric or node being “logged into” generates a login response which indicates the maximum receive data field size for data frames it is capable of receiving. Note that these values may not be the same. For example, a fabric may have the maximum allowed size of 2112 bytes while a node may limit the maximum size to 1024 bytes (e.g. the Hewlett-Packard Tachyon-Lite Fibre Channel Controller). A source node may not transmit a data frame larger than the maximum frame size indicated in the login response.
  • Since an encapsulated FCP data frame cannot be larger than the maximum Ethernet packet size, an upper limit is placed on the frame payload size during login by a device. According to one embodiment, the upper limit value is set by determining or discovering the maximum IP datagram size and subtracting 60 bytes to account for the various headers and trailers. For example, for an Ethernet Frame, the upper limit value equals 1440 bytes. That is, the payload for an FCP Frame cannot exceed 1440 bytes in size. This limit is established because an FCP Frame being transported across an IP network will not be allowed to fragment. Allowing IP datagrams to fragment degrades network performance and so most networks rarely fragment. An IP header's Do Not Fragment Flag can be used to prevent the IP layer from fragmenting the datagram. Even with node login setting an appropriate size for the FCP payload, this bit is set to ensure that fragmentation does not occur. According to one embodiment, the payload is padded to a multiple of 4 bytes to make it easy to convert frames being sent to legacy FC devices.
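  • The payload limit described above can be expressed as a small worked example. This is a sketch only; the 60-byte header/trailer allowance and the 4-byte padding rule are taken from the description, while the sample payload length is hypothetical.

```python
def max_fcp_payload(max_ip_datagram_size):
    # Upper limit = discovered maximum IP datagram size minus 60 bytes of headers/trailers.
    return max_ip_datagram_size - 60

def pad_to_4(payload):
    # Pad the FCP payload to a multiple of 4 bytes, as described above.
    pad = (-len(payload)) % 4
    return payload + b"\x00" * pad, pad

print(max_fcp_payload(1500))            # 1440 for a standard 1500-byte Ethernet IP datagram
padded, pad = pad_to_4(b"\x01" * 10)    # hypothetical 10-byte payload
print(len(padded), pad)                 # 12 2
```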
  • Each switch 235 preferably makes use of the Buffer to Buffer Receive Data Field size to force end nodes to communicate with data frames that will fit within an IP packet carried over an Ethernet link. According to an embodiment of the present invention, one method for enforcing the maximum frame size is to intercept Node Login packets and limit the Buffer to Buffer Receive Data Field Size to a value which can be transmitted across an Ethernet network without being fragmented. Therefore, each Management Processor may need to perform MTU (Maximum Transmission Unit) discovery to determine a size which does not result in fragmentation of IP packets in the network.
  • When an FC port performs a Port Login with an FC port which is local (i.e. connected to the same switch), it is not necessary to change the Buffer to Buffer Receive Data Field Size of the Login request or response. This is because, in one embodiment, the switch supports the maximum frame size for transfers between FC ports (on the same switch). However, the FC port interface logic will always redirect the Port Login packets to the switch's Management Processor to simplify the port interface logic. Thus, in this embodiment, the switch looks and acts like an FC switch from the point of view of any FC devices connected thereto. An example of the routing of Port Login Request and Response frames for local FC ports is shown in FIG. 10.
  • According to one embodiment, routing FC Port Login Request/Response packets to the Management Processor allows the Port Login for SCSI ports to be handled by the Management Processor. The Management Processor always handles login for SCSI.
  • According to one embodiment, an SoIP device is uniquely identified using two parameters: an IP address and an SoIP socket number. Therefore, it is possible for a device to have a unique IP address or for multiple devices to share an IP address. For example, all of the devices on a Fibre Channel arbitrated loop may share an IP address while a server Host Bus Adapter may have a dedicated IP address. In one embodiment, there are two possible modes for assignment of the SoIP socket number: local or global.
  • A single SoIP device connected directly to an IP network must have a unique IP address in order for the network to be able to route data frames to the device. An IP network will not route traffic based on the SoIP socket number. However, devices connected to a switch (e.g., switch 235) may share an IP address if the switch uses both the IP address and the SoIP socket number when switching data frames.
  • According to the present invention, an SoIP SAN with “legacy” Fibre Channel devices attached has different address domains due to the two different address methods used: IP and Fibre Channel. FIG. 11 shows an example of the address domains which exist in a network according to one embodiment of the present invention. SoIP devices communicate using IP addresses and SoIP socket numbers while the Fibre Channel devices (SCSI devices are treated as Fibre Channel devices by a switch) use Fibre Channel addresses. Each switch 235 performs address translation between the IP and Fibre Channel address domains. Switch 235-1 performs address translation between the IP address domain and FC address domain 1, and Switch 235-2 performs address translation between the IP address domain and FC address domain 2. Each switch 235 assigns an IP address, SoIP socket number and Fibre Channel address to each Fibre Channel device when the device performs a fabric login. A Fibre Channel device only learns about its assigned Fibre Channel address. The assigned IP address, SoIP socket number and Fibre Channel Address are maintained within a translation table (not shown) in the switch. Parallel SCSI devices are assigned their addresses by the switch during initialization of the SCSI port. The Fibre Channel ports direct all Name Server requests by a Fibre Channel device to the management processor for processing.
  • According to one embodiment of the present invention, the management processor converts Fibre Channel Name Server requests into SoIP Name Server requests that are then forwarded to the SoIP Name Server, e.g., implemented in server 280. In one embodiment, the SoIP name server functionality is distributed and thus handled directly by the management processor. Responses from the name server are returned to the management processor where they are converted into Fibre Channel Name Server responses before being forwarded to the port that originated the name server request. When a Fibre Channel device sends data frames to a device not located in its Fibre Channel address domain, switch 235 converts the packet into an SoIP compatible packet. The conversion encapsulates the FCP data frame in an IP data frame as described above. Referring back to FIG. 6 a, in one embodiment, the IP addresses and SoIP socket numbers are derived by using the Fibre Channel source address (S_ID) and the destination address (D_ID) as “keys” into the IP/Fibre Channel address conversion table on the name server. The Fibre Channel address fields are replaced by the SoIP socket numbers when translating a Fibre Channel data frame to an SoIP data frame. The packet is then transmitted on the IP network and routed using the destination IP address. If the destination device is an SoIP compatible device, the packet is processed directly (i.e., de-encapsulated and processed as an FCP packet) by the destination device. However, if the destination is a Fibre Channel (or parallel SCSI) device, the packet is routed to a switch 235, which receives the packet, de-encapsulates the SoIP packet and replaces the SoIP socket numbers with the appropriate source and destination Fibre Channel addresses based on the source and destination IP addresses and SoIP socket numbers.
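  • A minimal sketch of the translation just described is shown below, assuming a simple in-memory table. The Fibre Channel addresses, IP addresses and socket numbers are hypothetical; a real switch would populate the table from fabric login and the SoIP Name Server.

```python
# The Fibre Channel S_ID/D_ID serve as keys into a table of
# (IP address, SoIP socket number) entries; the reverse lookup is used
# for frames arriving from the IP network.  All values are made up.

fc_to_soip = {
    0x010200: ("192.168.1.10", 0x010200),   # local FC device
    0x020100: ("192.168.2.20", 0x020100),   # locally assigned alias of a remote device
}
soip_to_fc = {v: k for k, v in fc_to_soip.items()}

def fc_frame_to_soip(s_id, d_id, fcp_frame):
    src_ip, src_socket = fc_to_soip[s_id]
    dst_ip, dst_socket = fc_to_soip[d_id]
    # The FC address fields are replaced by SoIP socket numbers and the FCP
    # frame is encapsulated in an IP/UDP packet addressed to dst_ip.
    return {"src_ip": src_ip, "dst_ip": dst_ip,
            "src_socket": src_socket, "dst_socket": dst_socket,
            "payload": fcp_frame}

def soip_frame_to_fc(pkt):
    s_id = soip_to_fc[(pkt["src_ip"], pkt["src_socket"])]
    d_id = soip_to_fc[(pkt["dst_ip"], pkt["dst_socket"])]
    return s_id, d_id, pkt["payload"]

pkt = fc_frame_to_soip(0x010200, 0x020100, b"FCP frame")
print(soip_frame_to_fc(pkt))
```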
  • According to one embodiment, local assignment is the preferred method for assigning SoIP socket numbers. In this embodiment, native SoIP devices select their SoIP socket numbers while an SoIP switch (e.g., switch 235) assigns the SoIP socket number for Fibre Channel and SCSI devices attached to the switch. When the SoIP socket number is assigned locally, the value chosen may be any value that results in a unique IP Address/SoIP socket number combination. Devices that share an IP address must be assigned unique SoIP socket numbers in order to create a unique IP Address/SoIP socket number pair. Devices that have a unique IP address may have any desired SoIP socket number. In one embodiment, an SoIP switch assigns the SoIP socket numbers in such a manner as to simplify the routing of received data frames. A switch must also assign a locally significant Fibre Channel address to each “remote” device for use by the local devices in addressing the “remote” devices. These locally assigned addresses are only known by a switch within its Fibre Channel address domain. Thus each switch maintains a set of locally assigned Fibre Channel addresses which correspond to the globally known IP Address/SoIP Port Number pairs defined in the SoIP Name Server.
  • According to one embodiment, due to the different address domains, each switch 235 intercepts Fibre Channel Extended Link Service requests and responses which have Fibre Channel address information embedded in the payload. Extended Link Service requests and responses are generated infrequently. Therefore, it is acceptable to redirect the Extended Link Service requests to the switch's management processor which makes any necessary changes to the data frame. If an Extended Link Service request/response has no addressing information embedded in the payload, the Management Processor simply retransmits the packet with no modifications.
  • The IP Address and SoIP socket number assigned to a Fibre Channel or SCSI device are determined by the switch. The assignment of these addresses is implementation dependent. In a preferred embodiment, the SoIP socket number is assigned the device's local Fibre Channel address. In this embodiment, the switch obtains the local Fibre Channel address directly from the received data frame. Alternatively, assignment of the SoIP socket number is based on an incrementing number that can be used as an index into an address table.
  • In one embodiment, each device is assigned a unique IP address. However, this type of assignment may result in the use of a large number of IP addresses. The use of a single IP address for each device also has implications for routing in the IP network. Therefore, in a preferred embodiment, IP addresses are assigned such that at least a subset of a switch's attached devices share an IP address. For example, an IP address can be assigned to each switch port. Each device attached to that switch port then shares the port's IP address. Thus, an attached Fibre Channel N_Port would have a unique IP address while the devices on a Fibre Channel arbitrated loop attached thereto would share an IP address.
  • According to one embodiment, Fibre Channel addresses are assigned globally. Globally assigned Fibre Channel addresses provide the maximum compatibility for “legacy” Fibre Channel devices. In this embodiment, the SoIP name server is responsible for managing the allocation of Global Fibre Channel Addresses. A global Fibre Channel address space may need to be supported because in some cases Fibre Channel addresses may be embedded within “third-party” SCSI commands. An example of such a third-party command is COPY. The COPY command instructs another device to copy data. The use of “third-party” commands is rare but when used, either the command would need to be modified for address compatibility or the Fibre Channel addresses would need to be globally assigned.
  • With reference to the SoIP network shown in FIG. 12 a, an example third party COPY command will be used to illustrate a problem that occurs with locally assigned Fibre Channel addresses and third-party commands. In this example locally assigned Fibre Channel Addresses are also used as the SoIP socket number. Each device has a unique IP address in this example.
  • FIG. 12 b shows the IP Address and SoIP socket number each device has advertised to the Name Server which identifies how the device is addressed within the SoIP network. Each device is uniquely identified by the combination of IP Address and SoIP socket number. Assume that the switches 235-3 and 235-4 and Tape Library C are aware of every device in the system. Tape Library C would then have an address table that is the same as the name server's address table. Switches 235-3 and 235-4 will have assigned local Fibre Channel addresses to each device. FIG. 12 c illustrates the address table stored on switch 235-3 and FIG. 12 d illustrates the address table stored on switch 235-4. Because the Fibre Channel addresses are assigned locally, the address assignment is purely arbitrary.
  • Assume that Server A in local domain 1 sends a COPY command to Server B in local domain 3 indicating that data is to be copied from RAID drive B to Tape Library B, both of which are located in local domain 3. The COPY command will contain the addresses from Server A's perspective. Therefore, referring to FIG. 12 c, the command received by Server B is COPY from Fibre Channel device 000500 (RAID drive B) to Fibre Channel device 000600 (Tape Library B). However, Server B will interpret the COPY command using the address table of switch 235-4 (FIG. 12 d) and assume it should copy data from RAID drive A to Tape Library A and not RAID drive B to Tape Library B. Thus, the wrong operation will be performed. As another example, assume that Server B sends a command to Server A to copy from RAID drive A to Tape Library C. The command will be COPY from Fibre Channel address 000500 to 009900 (the addresses are from the perspective of switch 235-4). Server A will assume the command is to copy data from RAID drive B to a nonexistent device because 009900 is not in the address table of switch 235-3.
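  • The ambiguity in this example can be reproduced with a short sketch. Only the address values quoted above (000500, 000600, 009900) and the corresponding device names are taken from the example; the tables are otherwise hypothetical and incomplete.

```python
switch_3_table = {"000500": "RAID drive B", "000600": "Tape Library B"}    # FIG. 12 c (partial)
switch_4_table = {"000500": "RAID drive A", "000600": "Tape Library A",
                  "009900": "Tape Library C"}                              # FIG. 12 d (partial)

def interpret_copy(src_addr, dst_addr, local_table):
    src = local_table.get(src_addr, "a nonexistent device")
    dst = local_table.get(dst_addr, "a nonexistent device")
    return f"copy from {src} to {dst}"

# Server A's command (addresses from switch 235-3's perspective) is resolved
# by Server B against switch 235-4's table -- the wrong devices:
print(interpret_copy("000500", "000600", switch_4_table))
# copy from RAID drive A to Tape Library A

# Server B's command (addresses from switch 235-4's perspective) is resolved
# by Server A against switch 235-3's table -- an unknown destination:
print(interpret_copy("000500", "009900", switch_3_table))
# copy from RAID drive B to a nonexistent device
```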
  • According to one embodiment, the switch gets around this problem by intercepting each third party command and modifying the embedded Fibre Channel addresses to be compatible with the destination device. However, this requires that the source switch know the assignment of local addresses in the destination switch. While it is possible for a switch to convert the third-party commands, alternative methods are preferred.
  • According to one alternative method, Fibre Channel addresses are globally assigned for devices that are referenced by Fibre Channel address in third-party commands. The use of a Global Fibre Channel address allows third-party commands to be used with no modification, but sets the total number of devices possible in an SoIP network to the same maximum as a Fibre Channel network. Only those devices that are referenced in a third-party command require a global address, although all devices within an SoIP network can be assigned global addresses.
  • A Globally Assigned Fibre Channel address is preferably used as the device's SoIP socket number. This simplifies the conversion of “legacy” Fibre Channel data frames to SoIP compatible data frames. Therefore, globally assigning Fibre Channel addresses is equivalent to globally assigning SoIP socket numbers.
  • Global SoIP socket number allocation is managed by the SoIP Name Server, which allocates Global SoIP socket numbers as requested from a pool of free socket numbers, and deallocates socket numbers (returns them to the free pool) when they are no longer used. The assignment of Global SoIP socket numbers for all devices in an SoIP network is the simplest solution from a management standpoint because it does not require specifying the subset of devices that require a Global SoIP socket number (or alternatively, the devices that can use a local SoIP socket number).
  • Thus, all devices in an SoIP network either have a locally assigned SoIP socket number or a globally assigned SoIP socket number. All SoIP compatible devices and switches support both modes. Each device or switch determines from the SoIP Name Server which mode is to be supported when it logs into the network. An SoIP Name Server configuration parameter indicates the SoIP socket number allocation mode.
  • An environment that supports both local and global SoIP socket numbers is not required because it is expected that the need for global SoIP socket numbers will be eliminated due to a new form of Third-Party command format, which embeds World Wide Names in the command instead of the Fibre Channel address. Because World Wide Names are unique, the device receiving the command is able to determine the appropriate address(es) to use from its point of view. One implementation of this new third-party command is the EXTENDED COPY command. Native SoIP devices preferably use the version of third-party commands that embed World Wide Names in the command when SoIP socket numbers are locally assigned.
  • In one embodiment, when SoIP socket numbers are assigned globally, the requester indicates the minimum number of socket numbers requested and a 24-bit mask defining the boundary. For example, a 16-port switch may request 4096 socket numbers with a bit mask of FFF000 (hex), indicating that the socket numbers should be allocated on a boundary where the lower 12 bits are 0. The switch would then allocate 256 socket numbers to each port (for support of an arbitrated loop). Allocation of socket numbers on a specified boundary allows the switch to allocate socket numbers that directly correlate to port numbers. In the above example, bits 11:8 would identify the port. Native SoIP devices preferably allocate only one global SoIP socket number from the SoIP Name Server.
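  • The boundary-aligned allocation described above can be pictured with the short Python sketch below. The free-pool representation and function names are assumptions, but the arithmetic follows the FFF000 example: 4096 socket numbers aligned on a 4096 boundary, with bits 11:8 identifying the port.

```python
# Sketch of boundary-aligned allocation of global SoIP socket numbers.
def allocate_block(next_free, count, mask):
    """Return (base, new_next_free) for `count` socket numbers aligned per `mask`."""
    boundary = (~mask & 0xFFFFFF) + 1           # FFF000 -> boundary of 0x1000 (4096)
    base = (next_free + boundary - 1) & mask    # round up to the next aligned base
    return base, base + count                   # [base, base + count) is now in use

base, next_free = allocate_block(next_free=0x000123, count=4096, mask=0xFFF000)

# With 256 socket numbers per port, bits 11:8 of a socket number identify the
# port and bits 7:0 identify the arbitrated-loop device behind that port.
def port_of(socket_number):
    return ((socket_number - base) >> 8) & 0xF

print(hex(base), port_of(base + 0x0305))  # device 0x05 on port 3
```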
  • In one embodiment, the SoIP Name Server also includes a configuration parameter that selects “Maximum Fibre Channel Compatibility” mode which only has meaning for Global assignment of SoIP socket numbers. Devices are able to query the Name server for the value of this parameter. When enabled, this mode specifies that global SoIP socket numbers are to be allocated in blocks of 65536 (on boundaries of 65536) to switches. This mode is compatible with the existing Fibre Channel modes of address allocation where the lower 8 bits identify the device, the middle 8 bits identify the port and the upper 8 bits identify the switch. SoIP switches check for this mode and, if enabled, request 65536 socket numbers when requesting global SoIP socket numbers. In this mode, Native SoIP devices preferably allocate only one global SoIP socket number from the SoIP Name Server.
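  • A short sketch of how a 24-bit global SoIP socket number decomposes in this mode follows; the split mirrors conventional Fibre Channel addressing, and the example value is arbitrary.

```python
# Sketch: interpreting a global SoIP socket number in "Maximum Fibre Channel
# Compatibility" mode, where the upper 8 bits identify the switch, the middle
# 8 bits the port, and the lower 8 bits the device.
def split_fc_compatible(socket_number):
    switch = (socket_number >> 16) & 0xFF
    port = (socket_number >> 8) & 0xFF
    device = socket_number & 0xFF
    return switch, port, device

# A switch in this mode requests a block of 65536 socket numbers, i.e. one full
# "switch" value covering 256 ports x 256 devices.
print(split_fc_compatible(0x020517))  # -> (2, 5, 23)
```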
  • According to one embodiment, when operating in a Layer 2 network (e.g., no IP routers), the frame format is modified to simplify the encapsulation logic. A Layer 2 network does not require the IP Header or the UDP header. All frames are forwarded using the physical address (e.g. Ethernet MAC address). A switch then routes frames internally based on the Layer 2 physical address (e.g. Ethernet MAC address) combined with the SoIP socket number. In essence, the Layer 2 physical address replaces the IP address as a parameter in uniquely identifying an SoIP device. FIG. 13 shows the frame format for an FCP frame transmitted on Ethernet. An Ethernet Type value 290 is defined specifically for SoIP to allow a station receiving the frame to distinguish the frame from other frame types (e.g., IP). The IP and UDP headers have been removed, which reduces the frame overhead. An advantage is that the length and checksum fields in the UDP header no longer need to be generated. The generation of the IP and UDP headers introduces additional latency for the frame transmission because the length and checksum are located at the beginning of the frame. Therefore, it is necessary to buffer the entire frame to determine the length and checksum and write them into the header. For an Ethernet Layer 2 SoIP frame, it is only necessary to determine the amount of padding, if any, added at the end of the frame. The number of PAD bytes must be included in the SoIP Header to allow the PAD bytes to be removed at the receiving station. Since the padding is only required to satisfy a minimum Ethernet frame size of 64 bytes, it is possible to complete the header generation after 64 bytes of the frame (or the entire frame, if it is shorter) have been received.
  • The Layer 2 frame format is similar to the Layer 3 SoIP frame format described above with reference to FIG. 6, with the following differences:
  • a. The IP and UDP headers are no longer present.
  • b. The Ethernet Type value is different.
  • c. The CHECKSUM PAD field is replaced by the FC CRC field. The FC CRC field is a 4-byte field containing the Fibre Channel CRC calculated over the FCP header and payload. This field may be inserted by a source when a Fibre Channel data frame is encapsulated with no changes. Thus, the CRC received with the frame is still valid.
  • d. The CHECKSUM PAD flag is replaced by the FC CRC PRESENT flag. This bit indicates if the FC CRC field is present in the frame. Note that the CHECKSUM PAD field has no meaning since there is no need to calculate a UDP checksum.
  • e. The FRAME PAD LENGTH may have a non-zero value since the encapsulated frame length may be less than the Ethernet minimum of 64 bytes.
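  • A minimal sketch of Layer 2 SoIP encapsulation, reflecting the differences listed above, is shown below. The Ethernet Type value, the SoIP header layout and the field sizes used here are placeholders chosen for illustration, not the values defined for SoIP.

```python
# Sketch: building an Ethernet Layer 2 SoIP frame with no IP/UDP headers. Pad
# bytes are added only to reach the 64-byte Ethernet minimum, and the pad
# length is recorded in the (illustrative) SoIP header so the receiver can
# strip the pad. Header layout and EtherType are assumptions.
import struct

SOIP_ETHERTYPE = 0x88B5   # placeholder EtherType, not the value defined for SoIP
ETH_MIN_FRAME = 64        # minimum Ethernet frame size including the 4-byte FCS
ETH_HEADER_LEN = 14       # destination MAC + source MAC + EtherType
FCS_LEN = 4               # Ethernet CRC appended by the MAC hardware

def encapsulate_l2(dst_mac, src_mac, soip_socket, fcp_frame):
    soip_header_len = 4                              # 24-bit socket number + pad length byte
    payload_len = soip_header_len + len(fcp_frame)
    pad_len = max(0, ETH_MIN_FRAME - (ETH_HEADER_LEN + payload_len + FCS_LEN))
    soip_header = struct.pack("!I", soip_socket)[1:] + bytes([pad_len])
    return (dst_mac + src_mac + struct.pack("!H", SOIP_ETHERTYPE)
            + soip_header + fcp_frame + b"\x00" * pad_len)

frame = encapsulate_l2(b"\x02" * 6, b"\x04" * 6, soip_socket=0x010500, fcp_frame=b"\xAA" * 20)
print(len(frame))  # 60 bytes here; with the 4-byte FCS the frame meets the 64-byte minimum
```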
  • The UDP Header contains a Destination Port field and a Source Port field. The normal usage of these fields is to identify the software applications that are communicating with each other. An application requests a port number for use when sending a UDP “datagram”. This port number becomes the source port number for each UDP datagram sent by the application. When a UDP datagram is received, the destination port number is used by the UDP layer to determine the application to which the datagram will be forwarded. FIG. 14 a illustrates “demultiplexing” of UDP datagrams as is typical in the industry.
  • FIGS. 14 b and 14 c illustrate ways to add an SoIP layer according to embodiments of the present invention. FIG. 14 b illustrates frame demultiplexing when there is a single port number assigned to all SoIP devices. Further demultiplexing is then performed using the SoIP socket number to determine the device. Routing data frames to applications is then performed based on the FCP exchange numbers located in the FCP header. FIG. 14 c illustrates a similar example, but with separate UDP port numbers assigned to each SoIP device. In this case, it is not necessary to examine the SoIP socket number in order to forward the UDP datagram. (The SoIP socket number and IP address must still uniquely identify the device.) The choice between assigning a separate UDP port number to each SoIP device and using one UDP port number for all devices is implementation dependent.
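  • A toy sketch of the single-port case of FIG. 14 b follows: one UDP port is shared by all SoIP devices, the SoIP socket number selects the device, and the FCP exchange number then selects the application. The port number and all names here are assumptions.

```python
# Sketch of two-level demultiplexing with a single shared UDP port (FIG. 14b).
SOIP_UDP_PORT = 4000                               # assumed shared SoIP port number

DEVICES = {0x010500: "HBA-0", 0x010600: "HBA-1"}   # SoIP socket number -> local device

def demux(udp_dst_port, soip_socket, fcp_exchange_id):
    if udp_dst_port != SOIP_UDP_PORT:
        return ("other UDP application", None)     # handled by the normal UDP layer
    device = DEVICES[soip_socket]                  # second-level demux on the socket number
    return (device, fcp_exchange_id)               # the exchange ID selects the application

print(demux(4000, 0x010500, fcp_exchange_id=0x12))  # ('HBA-0', 18)
```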
  • The UDP demultiplexing examples illustrated in FIGS. 14 b and 14 c are oriented toward a server with one or more host bus adapters (where the host bus adapters are the SoIP devices). A switch is generally less complicated in the sense that data frames are forwarded to end devices and the application layer does not have to be handled.
  • The addressing mechanisms described above allow software applications to appear as SoIP devices by registering with the name server using a different address. This opens up the possibility for applications to advertise themselves in the name server for use by other applications. An example is a COPY manager that could be used by a higher level backup application.
  • According to one embodiment, each storage device, when it registers with the name server, must include the UDP port number to use when sending data frames to the device. In a normal UDP application, the destination port would save the source port number for use in sending a reply. However, this mechanism is not feasible for use with “legacy” FC switches since it requires the switch to associate the source port numbers with the exchange IDs. It is much simpler to require a storage device to always use the same UDP port number.
  • As a result, according to this embodiment, a storage device is identified by 3 parameters in the name server database: IP Address, UDP Port Number, and SoIP socket number. An additional parameter required is the physical address (e.g. Ethernet MAC address) which is determined in the normal manner for IP networks. ARP (address resolution protocol) is preferably used to learn the physical address to use for an IP address. The physical address to use can also be learned when a frame is received from a device. For example, the physical address can be learned when a Port Login request is received. The physical address may not be the physical address of the actual device but the address of an IP router.
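  • The resulting name-server record can be pictured as in the sketch below; the field names and example values are assumptions, but the three identifying parameters and the separately learned physical address follow the description above.

```python
# Sketch of an SoIP Name Server entry for a storage device.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SnsEntry:
    ip_address: str             # e.g. "10.0.3.7"
    udp_port: int               # fixed UDP port the device always uses for SoIP
    soip_socket: int            # 24-bit SoIP socket number
    mac_address: Optional[str]  # may be an IP router's MAC when the device is remote

entry = SnsEntry("10.0.3.7", 4001, 0x010500, None)
entry.mac_address = "00:11:22:33:44:55"  # learned later, e.g. via ARP or from a Port Login
print(entry)
```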
  • The SoIP Name Server (SNS) must have a UDP Port number that is known by all of the SoIP devices within an SoIP network since the port number cannot be learned from another source. This could be a “well-known” port number or a registered port number. This approach is similar to a Domain Name Server (DNS) that has a well-known port number of 53. The assignment of “well-known” port numbers is done by the IANA (Internet Assigned Numbers Authority).
  • Routing within an IP network is affected by the choice of addressing mode, which impacts the ability of switches and routers to determine what constitutes a “conversation”. A conversation is a set of data frames that are related and which should arrive in order. However, it is assumed that different conversations have no ordering relationship with respect to one another. In other words, the ordering of frames from different conversations can be changed with no effect. For example, assume that frames for 3 conversations (A, B and C) are transmitted in the following order (A1 sent first):
  • A1 A2 B1 B2 B3 A3 B4 A4 A5 A6 A7 B5 B6 B7 C1 C2 C3 A8.
  • It is permissible for the frames to be received in any of the following sequences (note that there are many more possible sequences that are acceptable): A1 A2 A3 A4 A5 A6 A7 A8 B1 B2 B3 B4 B5 B6 B7 C1 C2 C3; A1 A2 A3 A4 A5 A6 C1 C2 B1 B2 B3 B4 B5 B6 B7 C3 A7 A8; and C1 C2 A1 C3 A2 B1 B2 A3 A4 A5 A6 B3 B4 B5 A7 A8 B6 B7.
  • In each of the above sequences, the frames for a particular conversation arrive in order with respect to each other, but out of order with respect to frames from other conversations. The ability to identify different conversations allows load balancing to be performed by allowing traffic to be routed on a conversation basis. Switches and routers can determine conversations based on several parameters within a data frame including Destination/Source addresses, IP Protocol, UDP/TCP Port Numbers, etc. The parameters actually used are dependent on the switch/router implementation.
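  • The ordering rule can be checked mechanically, as in the short sketch below, which verifies that each conversation's frames arrive in order while frames of different conversations may be freely interleaved; the example sequence is the second acceptable sequence listed above.

```python
# Small check that a received frame sequence keeps each conversation in order
# even though frames of different conversations are freely interleaved.
from collections import defaultdict

def per_conversation_in_order(frames):
    last = defaultdict(int)
    for conv, seq in frames:                 # e.g. ("A", 3) represents frame A3
        if seq != last[conv] + 1:
            return False
        last[conv] = seq
    return True

received = [("A", 1), ("A", 2), ("A", 3), ("A", 4), ("A", 5), ("A", 6),
            ("C", 1), ("C", 2), ("B", 1), ("B", 2), ("B", 3), ("B", 4),
            ("B", 5), ("B", 6), ("B", 7), ("C", 3), ("A", 7), ("A", 8)]
print(per_conversation_in_order(received))   # True: a valid reordering across conversations
```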
  • Storage traffic between the same two devices should be treated as a single conversation. It is not acceptable for storage commands to be received out of order because there may be a relationship between the commands (e.g. ordered queuing). Therefore, it is preferable to select an addressing mechanism that makes a device unique to a switch/router but does not attempt to distinguish commands. Different IP addresses are an ideal choice for distinguishing devices since this method works with all switches and routers. When an IP address is shared, it is preferred that the UDP Port Numbers be unique for the devices sharing the IP address. Thus, devices that share an IP address have the possibility to be treated separately by switches and routers that classify conversations based on UDP port numbers. It is understood that the discussion of UDP Port Numbers above also applies to TCP Header Port Numbers when SoIP is implemented using TCP instead of UDP.
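  • As an illustration of the preferred addressing choice, the sketch below forms an illustrative conversation key from addresses and UDP port numbers only, so that all storage traffic between two devices stays in one ordered conversation while devices sharing an IP address can still be distinguished by port number; actual switches and routers choose their own parameters.

```python
# Sketch of how a switch/router might classify a frame into a conversation for
# load balancing: the key uses addresses and UDP ports but nothing finer, so
# commands between the same two devices cannot be reordered.
def conversation_key(src_ip, dst_ip, ip_protocol, src_port, dst_port):
    return (src_ip, dst_ip, ip_protocol, src_port, dst_port)

# Two devices sharing an IP address but using distinct UDP ports can still be
# load-balanced separately by routers that include port numbers in the key.
k1 = conversation_key("10.0.1.5", "10.0.3.7", 17, 4001, 4001)
k2 = conversation_key("10.0.1.5", "10.0.3.7", 17, 4002, 4001)
print(k1 != k2)   # True: treated as different conversations
```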
  • FIG. 15 is a high level block diagram which illustrates the basic architecture for a switch port that supports both Fibre Channel and Gigabit Ethernet according to an embodiment of the present invention. The Fibre Channel and Gigabit Ethernet ports use the same encoding/decoding method (8B/10B) with each port requiring a serializer/deserializer (SERDES) block for converting to/from the high speed serial interface. Therefore, these two interfaces share the 8B/10B block 310 and SERDES block 315 in this embodiment as shown in FIG. 15. These two interface types differ in clock speed with Fibre Channel operating at 1062.5 MHz and Gigabit Ethernet operating at 1250 MHz. Higher speed versions of these interfaces are being developed which will also have a different clock speed. Therefore, a multiplexer 345 selects the clock used by the logic based on the port type. In addition, these two interfaces share the switch fabric interface logic block 320 which interfaces with the switch fabric (including the management interface). The MAC blocks (blocks 325 and 330) implement the appropriate protocol state machines for the interface (Fibre Channel or Gigabit Ethernet). The MAC blocks 325 and 330 convert received data into frames which are forwarded to the routing logic blocks 335 and 340, respectively. The MAC blocks 325 and 330 also receive data frames from the routing logic blocks 335 and 340, respectively, which are then transmitted according to the interface's (Fibre Channel or Gigabit Ethernet) protocol. Routing logic blocks 335 and 340 determine where each received frame should be routed based on addressing information within the frame. Routing logic blocks 335 and 340 also perform any modifications to the frames that are required. For example, a routing logic block will remove the SoIP encapsulation from a frame being forwarded to a Fibre Channel port. The routing logic block then sends the frame to the switch fabric with an indication of the destination output ports. Egress data frames (frames from the switch fabric to the output port) are received by a routing logic block and forwarded to the associated MAC. Additional processing may be performed on the frame by the routing logic block before the MAC receives the frame. For example, Ethernet port routing logic block 340 may convert a Fibre Channel frame into an SoIP frame.
  • According to another embodiment of the present invention as shown in FIG. 16, the two routing blocks of FIG. 15 are combined into a single routing logic block 350. This optimization is possible because the routing logic used by these two interfaces is very similar. In one embodiment, routing logic block 350 includes logic blocks which are dependent on the port type and other blocks that are common to both port types. This optimization reduces the number of logic gates required on an ASIC. Routing block 350 determines where a frame is routed based on addressing information within the data frame. This function is known as address resolution and is performed for both Fibre Channel and Gigabit Ethernet data frames. Therefore, address resolution logic can be shared by these two port interfaces, though it is necessary for the routing logic to select different data based on the port type. The logic within Routing Logic block 350 can be implemented as hard-coded logic or as a programmable method using a network processor, which is designed specifically for processing packets and which can be programmed to route either Fibre Channel frames or Ethernet frames. Therefore, the routing logic hardware can be shared by using different network processor software. In one embodiment, routing logic block 350 also includes an input and output FIFO memory which is shared by the two port interfaces. Additional logic that can be shared includes statistics registers and control registers. Statistics registers are used to count the number of frames received, frames transmitted, bytes received, bytes transmitted, etc. A common set of statistics registers can be used. These registers are modified by control signals from each MAC. Control registers determine the operating mode of each MAC. A common set of statistics and control registers reduces the logic required to implement the registers and for interfacing with an external control source such as a switch management CPU.
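  • A rough software analogue of the shared address-resolution step might look like the sketch below, where a single lookup is used for both port types and only the lookup key differs; the table shape and field names are assumptions for illustration only.

```python
# Illustrative sketch of address resolution shared by Fibre Channel and Gigabit
# Ethernet ports: one lookup step, with the key selected by port type.
FORWARDING_TABLE = {
    ("FC", 0x010600): 7,                     # 24-bit FC destination address -> output port
    ("GE", ("10.0.3.7", 0x010600)): 7,       # (destination IP, SoIP socket) -> output port
}

def resolve_output_port(port_type, frame):
    if port_type == "FC":
        key = ("FC", frame["d_id"])          # use the FC destination address
    else:
        key = ("GE", (frame["dst_ip"], frame["soip_socket"]))
    return FORWARDING_TABLE[key]

print(resolve_output_port("FC", {"d_id": 0x010600}))
print(resolve_output_port("GE", {"dst_ip": "10.0.3.7", "soip_socket": 0x010600}))
```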
  • In another embodiment as shown in FIG. 17, the low-level port interface logic (e.g., FC MAC block 325 and Ethernet MAC block 330) is combined into a single MAC block 360. One problem with this approach, however, is that these two logic blocks have little in common. In addition, it is possible to purchase proprietary blocks which implement Gigabit Ethernet MAC and Fibre Channel Port Interface logic. Combining these two blocks would severely hinder the use of these proprietary blocks.
  • According to another embodiment of the present invention as shown in FIG. 18, a Field Programmable Gate Array (FPGA) 370 is used to select the interface protocol supported by the port. The FPGA configuration loaded would be based on the port type. In this embodiment, separate FPGA code is developed for the Fibre Channel and Gigabit Ethernet interfaces. Thus, the FPGA logic can be optimized for the particular interface. A single hardware design supports both interfaces, with software determining the FPGA code to be downloaded based on the port type.
  • A common port must also deal with the physical interface external to an ASIC. As is well known, such an interface may include, for example, a copper, multi-mode fiber or single-mode fiber interface. Also, the components are not necessarily the same between Fibre Channel and Ethernet. According to an embodiment of the present invention as shown in FIG. 19, a Gigabit Interface Converter (GBIC) 380 is provided to allow a user to select the desired physical interface. A GBIC is a standardized module which has a common form factor and electrical interface and allows any of the many physical interfaces to be installed. GBIC modules are available from many vendors (e.g. HP, AMP, Molex, etc.) and support all of the standard Fibre Channel and Gigabit Ethernet physical interfaces. FIG. 19 shows a block diagram of a common FC/Gigabit Ethernet port interface (e.g., as shown in FIGS. 15, 16, 17 and 18) combined with a GBIC interface according to this embodiment. The ASIC connects to a GBIC connector 385 which allows the user to change GBIC modules. Thus, the user can select the media type by installing the appropriate GBIC 380.
  • GBIC modules typically contain a serial EEROM whose contents can be read to determine the type of module (e.g. Fibre Channel, Gigabit Ethernet, Infiniband, Copper, Multi-mode, Single-mode, etc.). The GBIC can thus indicate the type of interface, e.g., FC or GE or Infiniband, to use. However, it is possible for the GBIC to support multiple interfaces, for example both FC and GE. Therefore, in one embodiment, the port interface type is user switchable/configurable, and in another embodiment the type of the link interface is automatically determined through added intelligence, for example, through a “handshake”.
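  • The port-type selection from the GBIC's serial EEROM might be sketched as below; the EEROM offset and type codes shown are assumptions made for illustration, not values from the GBIC specification or from this description.

```python
# Hedged sketch of configuring a common port from a GBIC's serial EEROM: a
# module-type code read from the EEROM selects FC, Gigabit Ethernet or another
# interface; modules supporting several types fall back to user configuration
# or a handshake. The type codes and the offset of the type byte are assumed.
MODULE_TYPES = {0x01: "Fibre Channel", 0x02: "Gigabit Ethernet", 0x03: "FC or GE"}

def configure_port(eerom_bytes, user_choice=None):
    module_type = MODULE_TYPES.get(eerom_bytes[0], "unknown")   # assumed type byte at offset 0
    if module_type == "FC or GE":
        return user_choice or "negotiate via handshake"
    return module_type

print(configure_port(bytes([0x02])))                   # Gigabit Ethernet module
print(configure_port(bytes([0x03]), "Fibre Channel"))  # multi-protocol module, user selects FC
```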
  • According to another embodiment of the present invention, an SoIP intelligent network interface card (NIC) 400 is provided as shown in FIG. 20. NIC card 400 is able to send and receive both IP and SoIP traffic. In either case, NIC card 400 has the intelligence to determine the type of traffic and direct it accordingly.
  • The host 410 may issue both storage commands and network commands to NIC card 400 through the PCI interface 420. These commands are sent with a specified address which is used to direct the commands to either the Direct Path or the Storage Traffic Engine. Storage commands are issued via the SCSI command set, and network commands are issued via Winsock and/or TCP/IP.
  • NIC card 400 directs storage commands to the Storage Traffic Engine 430 based on the specified address. Storage Traffic Engine 430 handles the exchange management and sequence management for the duration of the SCSI operation. SCSI operations are then carried out via SoIP and transmitted to the network 470 via a media access controller (MAC) block 450, which in one embodiment is a Gigabit Ethernet MAC. NIC card 400 directs non-SoIP traffic to the Direct Path 440 based on the specified address. The Direct Path 440 processes the commands and transmits the specified packets to network 470 via block 450. When receiving data from network 470 via MAC 450, NIC 400 demultiplexes the traffic and directs it accordingly. Storage traffic received as SoIP is sent to storage traffic block 430. Non-SoIP traffic is sent directly to the host via direct path 440.
  • The multiplexer block 460 handles arbitration for the output path when both Direct Path 440 and Storage Traffic Engine 430 simultaneously send traffic to MAC 450. For traffic received from network 470 by MAC 450, Mux block 460 demultiplexes the traffic and sends it accordingly to either Direct Path 440 or Storage Traffic Engine 430.
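  • The receive-side split inside the NIC can be pictured with the sketch below; the test used to recognize SoIP traffic (a shared UDP port, as in the earlier demultiplexing sketch) and all names are assumptions for illustration.

```python
# Sketch of the NIC's receive-side split between the Storage Traffic Engine and
# the Direct Path: SoIP frames go to the storage engine, everything else goes
# straight to the host networking stack.
SOIP_UDP_PORT = 4000     # assumed shared SoIP port (see the demultiplexing sketch above)

def receive(frame):
    if frame.get("udp_dst_port") == SOIP_UDP_PORT:
        return "Storage Traffic Engine"      # SCSI/FCP carried over SoIP
    return "Direct Path"                     # ordinary IP traffic handed to the host

print(receive({"udp_dst_port": 4000}))   # storage traffic
print(receive({"udp_dst_port": 80}))     # e.g. HTTP, forwarded directly to the host
```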
  • While the invention has been described by way of example and in terms of the specific embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. To the contrary, it is intended to cover various modifications and similar arrangements as would be apparent to those skilled in the art. Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.

Claims (10)

What is claimed is:
1. A network device having one or more Fibre Channel (FC) ports and one or more IP ports, wherein the device is configured to encapsulate frames received from a first FC storage area network (SAN) portion on a first one of said FC ports in one or more IP datagrams using TCP as a transport protocol for delivery to a destination device on a second FC SAN portion via a first one of said IP ports, each received frame having a source FC address and a destination FC address.
2. The device of claim 1, wherein the device is configured to encapsulate each received frame in one or more IP datagrams, each datagram including a source IP address, a destination IP address, a destination TCP port number and a source TCP port number based on the destination and source FC addresses in the frame being encapsulated.
3. The device of claim 1, wherein the device is configured to de-encapsulate an FC frame received in an IP datagram from the second FC SAN portion over the first IP port, wherein each de-encapsulated FC frame has a source FC address and a destination FC address.
4. The device of claim 3, wherein the address information includes a source IP address, a destination IP address, a destination TCP port number and a source TCP port number.
5. The device of claim 1, wherein the device is a network switch device coupling the first FC SAN portion to an IP network.
6. A method of transmitting data in a network device having one or more Fibre Channel (FC) ports and one or more IP ports, the method comprising:
receiving, on a first one of said FC ports, a frame from a source device in a first FC storage area network (SAN) portion, the frame having a source FC address and a destination FC address;
encapsulating the frame in one or more IP datagrams using TCP as a transport protocol; and
transmitting the one or more IP datagrams to a second FC SAN portion via a first one of the IP ports, the IP datagrams being destined for a destination device on the second FC SAN portion.
7. The method of claim 6, wherein encapsulating includes:
inserting in each datagram a source IP address and a source TCP port number based on the source FC address in the frame; and
inserting in each datagram a destination IP address and a destination TCP port number based on the destination FC address in the frame.
8. The method of claim 6, wherein the network device is a network switch device coupling the first FC SAN portion to an IP network.
9. A method of transmitting data in a network device having one or more Fibre Channel (FC) ports and one or more IP ports, the method comprising:
receiving, on a first one of said IP ports, an IP datagram from a source device in a second FC storage area network (SAN) portion, the datagram using TCP as a transport protocol, the datagram having an encapsulated frame with a source FC address and a destination FC address;
de-encapsulating the encapsulated frame; and
transmitting the de-encapsulated frame to a first FC SAN portion via a first one of the FC ports, the de-encapsulated frame being destined for a destination device on the first FC SAN portion.
10. The method of claim 9, wherein the address information includes a source IP address, a destination IP address, a destination TCP port number and a source TCP port number.
US13/284,309 1999-03-10 2011-10-28 Method and apparatus for transferring data between ip network devices and scsi and fibre channel devices over an ip network Abandoned US20120039341A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/284,309 US20120039341A1 (en) 1999-03-10 2011-10-28 Method and apparatus for transferring data between ip network devices and scsi and fibre channel devices over an ip network

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US12360699P 1999-03-10 1999-03-10
US09/500,119 US6400730B1 (en) 1999-03-10 2000-02-08 Method and apparatus for transferring data between IP network devices and SCSI and fibre channel devices over an IP network
US10/138,029 US7197047B2 (en) 1999-03-10 2002-04-30 Method and apparatus for transferring data between IP network devices and SCSI and fibre channel devices over an IP network
US11/691,320 US20070286233A1 (en) 1999-03-10 2007-03-26 Method and Apparatus for Transferring Data Between IP Network Devices and SCSI and Fibre Channel Devices Over an IP Network
US13/284,309 US20120039341A1 (en) 1999-03-10 2011-10-28 Method and apparatus for transferring data between ip network devices and scsi and fibre channel devices over an ip network

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/691,320 Continuation US20070286233A1 (en) 1999-03-10 2007-03-26 Method and Apparatus for Transferring Data Between IP Network Devices and SCSI and Fibre Channel Devices Over an IP Network

Publications (1)

Publication Number Publication Date
US20120039341A1 true US20120039341A1 (en) 2012-02-16

Family

ID=26821713

Family Applications (4)

Application Number Title Priority Date Filing Date
US09/500,119 Expired - Lifetime US6400730B1 (en) 1999-03-10 2000-02-08 Method and apparatus for transferring data between IP network devices and SCSI and fibre channel devices over an IP network
US10/138,029 Expired - Lifetime US7197047B2 (en) 1999-03-10 2002-04-30 Method and apparatus for transferring data between IP network devices and SCSI and fibre channel devices over an IP network
US11/691,320 Abandoned US20070286233A1 (en) 1999-03-10 2007-03-26 Method and Apparatus for Transferring Data Between IP Network Devices and SCSI and Fibre Channel Devices Over an IP Network
US13/284,309 Abandoned US20120039341A1 (en) 1999-03-10 2011-10-28 Method and apparatus for transferring data between ip network devices and scsi and fibre channel devices over an ip network

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US09/500,119 Expired - Lifetime US6400730B1 (en) 1999-03-10 2000-02-08 Method and apparatus for transferring data between IP network devices and SCSI and fibre channel devices over an IP network
US10/138,029 Expired - Lifetime US7197047B2 (en) 1999-03-10 2002-04-30 Method and apparatus for transferring data between IP network devices and SCSI and fibre channel devices over an IP network
US11/691,320 Abandoned US20070286233A1 (en) 1999-03-10 2007-03-26 Method and Apparatus for Transferring Data Between IP Network Devices and SCSI and Fibre Channel Devices Over an IP Network

Country Status (1)

Country Link
US (4) US6400730B1 (en)

Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100309924A1 (en) * 2007-12-20 2010-12-09 Neil Harrison Client/server adaptation scheme for communications traffic
US8886981B1 (en) 2010-09-15 2014-11-11 F5 Networks, Inc. Systems and methods for idle driven scheduling
US9077554B1 (en) 2000-03-21 2015-07-07 F5 Networks, Inc. Simplified method for processing multiple connections from the same client
US9106985B2 (en) 2013-01-20 2015-08-11 International Business Machines Corporation Networking device port multiplexing
US9141625B1 (en) 2010-06-22 2015-09-22 F5 Networks, Inc. Methods for preserving flow state during virtual machine migration and devices thereof
US9172753B1 (en) 2012-02-20 2015-10-27 F5 Networks, Inc. Methods for optimizing HTTP header based authentication and devices thereof
US9231879B1 (en) 2012-02-20 2016-01-05 F5 Networks, Inc. Methods for policy-based network traffic queue management and devices thereof
US9246819B1 (en) * 2011-06-20 2016-01-26 F5 Networks, Inc. System and method for performing message-based load balancing
US9270766B2 (en) 2011-12-30 2016-02-23 F5 Networks, Inc. Methods for identifying network traffic characteristics to correlate and manage one or more subsequent flows and devices thereof
US9554276B2 (en) 2010-10-29 2017-01-24 F5 Networks, Inc. System and method for on the fly protocol conversion in obtaining policy enforcement information
US9647954B2 (en) 2000-03-21 2017-05-09 F5 Networks, Inc. Method and system for optimizing a network by independently scaling control segments and data flow
US20170317923A1 (en) * 2014-11-05 2017-11-02 Bull Sas Method for quick reconfiguration of routing in the event of a fault in a port of a switch
CN107864099A (en) * 2017-10-23 2018-03-30 中国科学院空间应用工程与技术中心 A kind of flow control methods and system of isomery FC networks
US9942134B2 (en) 2015-09-30 2018-04-10 International Business Machines Corporation Holding of a link in an optical interface by a lower level processor until authorization is received from an upper level processor
US10015286B1 (en) 2010-06-23 2018-07-03 F5 Networks, Inc. System and method for proxying HTTP single sign on across network domains
US10015143B1 (en) 2014-06-05 2018-07-03 F5 Networks, Inc. Methods for securing one or more license entitlement grants and devices thereof
USRE47019E1 (en) 2010-07-14 2018-08-28 F5 Networks, Inc. Methods for DNSSEC proxying and deployment amelioration and systems thereof
US10097616B2 (en) 2012-04-27 2018-10-09 F5 Networks, Inc. Methods for optimizing service of content requests and devices thereof
US10122630B1 (en) 2014-08-15 2018-11-06 F5 Networks, Inc. Methods for network traffic presteering and devices thereof
US10135831B2 (en) 2011-01-28 2018-11-20 F5 Networks, Inc. System and method for combining an access control system with a traffic management system
US10182013B1 (en) 2014-12-01 2019-01-15 F5 Networks, Inc. Methods for managing progressive image delivery and devices thereof
US10187317B1 (en) 2013-11-15 2019-01-22 F5 Networks, Inc. Methods for traffic rate control and devices thereof
US10230566B1 (en) 2012-02-17 2019-03-12 F5 Networks, Inc. Methods for dynamically constructing a service principal name and devices thereof
US10375155B1 (en) 2013-02-19 2019-08-06 F5 Networks, Inc. System and method for achieving hardware acceleration for asymmetric flow connections
US10404698B1 (en) 2016-01-15 2019-09-03 F5 Networks, Inc. Methods for adaptive organization of web application access points in webtops and devices thereof
US10447765B2 (en) 2017-07-13 2019-10-15 International Business Machines Corporation Shared memory device
US10505818B1 (en) 2015-05-05 2019-12-10 F5 Networks. Inc. Methods for analyzing and load balancing based on server health and devices thereof
US10505792B1 (en) 2016-11-02 2019-12-10 F5 Networks, Inc. Methods for facilitating network traffic analytics and devices thereof
US10721269B1 (en) 2009-11-06 2020-07-21 F5 Networks, Inc. Methods and system for returning requests with javascript for clients before passing a request to a server
US10791088B1 (en) 2016-06-17 2020-09-29 F5 Networks, Inc. Methods for disaggregating subscribers via DHCP address translation and devices thereof
US10797888B1 (en) 2016-01-20 2020-10-06 F5 Networks, Inc. Methods for secured SCEP enrollment for client devices and devices thereof
US10812266B1 (en) 2017-03-17 2020-10-20 F5 Networks, Inc. Methods for managing security tokens based on security violations and devices thereof
US10834065B1 (en) 2015-03-31 2020-11-10 F5 Networks, Inc. Methods for SSL protected NTLM re-authentication and devices thereof
US10972453B1 (en) 2017-05-03 2021-04-06 F5 Networks, Inc. Methods for token refreshment based on single sign-on (SSO) for federated identity environments and devices thereof
US11063758B1 (en) 2016-11-01 2021-07-13 F5 Networks, Inc. Methods for facilitating cipher selection and devices thereof
US11122083B1 (en) 2017-09-08 2021-09-14 F5 Networks, Inc. Methods for managing network connections based on DNS data and network policies and devices thereof
US11122042B1 (en) 2017-05-12 2021-09-14 F5 Networks, Inc. Methods for dynamically managing user access control and devices thereof
US11178150B1 (en) 2016-01-20 2021-11-16 F5 Networks, Inc. Methods for enforcing access control list based on managed application and devices thereof
US11343237B1 (en) 2017-05-12 2022-05-24 F5, Inc. Methods for managing a federated identity environment using security and access control data and devices thereof
US11350254B1 (en) 2015-05-05 2022-05-31 F5, Inc. Methods for enforcing compliance policies and devices thereof
US20220239765A1 (en) * 2021-01-27 2022-07-28 EMC IP Holding Company LLC Singular control path for mainframe storage
US11757946B1 (en) 2015-12-22 2023-09-12 F5, Inc. Methods for analyzing network traffic and enforcing network policies and devices thereof
US11838851B1 (en) 2014-07-15 2023-12-05 F5, Inc. Methods for managing L7 traffic classification and devices thereof
US11895138B1 (en) 2015-02-02 2024-02-06 F5, Inc. Methods for improving web scanner accuracy and devices thereof

Families Citing this family (386)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6185203B1 (en) * 1997-02-18 2001-02-06 Vixel Corporation Fibre channel switching fabric
US6118776A (en) 1997-02-18 2000-09-12 Vixel Corporation Methods and apparatus for fiber channel interconnection of private loop devices
US7266706B2 (en) 1999-03-03 2007-09-04 Yottayotta, Inc. Methods and systems for implementing shared disk array management functions
US6690682B1 (en) * 1999-03-12 2004-02-10 Lucent Technologies Inc. Bit multiplexing of packet-based channels
US7295554B1 (en) * 1999-03-12 2007-11-13 Lucent Technologies Inc. Word Multiplexing of encoded signals into a higher bit rate serial data stream
US6718139B1 (en) * 1999-09-13 2004-04-06 Ciena Corporation Optical fiber ring communication system
US6650803B1 (en) * 1999-11-02 2003-11-18 Xros, Inc. Method and apparatus for optical to electrical to optical conversion in an optical cross-connect switch
US6597826B1 (en) 1999-11-02 2003-07-22 Xros, Inc. Optical cross-connect switching system with bridging, test access and redundancy
US6598088B1 (en) * 1999-12-30 2003-07-22 Nortel Networks Corporation Port switch
US6772270B1 (en) * 2000-02-10 2004-08-03 Vicom Systems, Inc. Multi-port fibre channel controller
US6877044B2 (en) 2000-02-10 2005-04-05 Vicom Systems, Inc. Distributed storage management platform architecture
WO2001071524A1 (en) * 2000-03-22 2001-09-27 Yotta Yotta, Inc. Method and system for providing multimedia information on demand over wide area networks
US6757725B1 (en) * 2000-04-06 2004-06-29 Hewlett-Packard Development Company, Lp. Sharing an ethernet NIC between two sub-systems
US6898670B2 (en) * 2000-04-18 2005-05-24 Storeage Networking Technologies Storage virtualization in a storage area network
US6892233B1 (en) * 2000-05-04 2005-05-10 Nortel Networks Limited Optical communication network and method of remotely managing multiplexers
US6970942B1 (en) * 2000-05-08 2005-11-29 Crossroads Systems, Inc. Method of routing HTTP and FTP services across heterogeneous networks
US6859439B1 (en) * 2000-05-12 2005-02-22 International Business Machines Corporation Partition-to-partition communication employing a single channel path with integrated channel-to-channel function
US6728772B1 (en) * 2000-05-12 2004-04-27 International Business Machines Corporation Automatic configuration of a channel-to-channel connection employing channel-to-channel functioning integrated within one or more channels of a computing environment
WO2001090902A1 (en) * 2000-05-23 2001-11-29 Sangate Systems, Inc. Method and apparatus for data replication using scsi over tcp/ip
US7113984B1 (en) * 2000-06-02 2006-09-26 Nortel Networks Limited Applications for networked storage systems
US6920153B2 (en) * 2000-07-17 2005-07-19 Nortel Networks Limited Architecture and addressing scheme for storage interconnect and emerging storage service providers
US7197046B1 (en) * 2000-08-07 2007-03-27 Shrikumar Hariharasubrahmanian Systems and methods for combined protocol processing protocols
US6952734B1 (en) 2000-08-21 2005-10-04 Hewlett-Packard Development Company, L.P. Method for recovery of paths between storage area network nodes with probationary period and desperation repair
US6922414B1 (en) * 2000-08-21 2005-07-26 Hewlett-Packard Development Company, L.P. Apparatus and method for dynamic command queue depth adjustment for storage area network nodes
US7020715B2 (en) * 2000-08-22 2006-03-28 Adaptec, Inc. Protocol stack for linking storage area networks over an existing LAN, MAN, or WAN
US6944152B1 (en) * 2000-08-22 2005-09-13 Lsi Logic Corporation Data storage access through switched fabric
US7222176B1 (en) * 2000-08-28 2007-05-22 Datacore Software Corporation Apparatus and method for using storage domains for controlling data in storage area networks
US7401139B1 (en) * 2000-09-07 2008-07-15 International Business Machines Corporation Storage area network management and configuration method and apparatus via enabling in-band communications
US6831916B1 (en) * 2000-09-28 2004-12-14 Balaji Parthasarathy Host-fabric adapter and method of connecting a host system to a channel-based switched fabric in a data network
US6775719B1 (en) 2000-09-28 2004-08-10 Intel Corporation Host-fabric adapter and method of connecting a host system to a channel-based switched fabric in a data network
US7181541B1 (en) 2000-09-29 2007-02-20 Intel Corporation Host-fabric adapter having hardware assist architecture and method of connecting a host system to a channel-based switched fabric in a data network
US7702762B1 (en) * 2000-10-11 2010-04-20 International Business Machines Corporation System for host-to-host connectivity using ficon protocol over a storage area network
US7107359B1 (en) 2000-10-30 2006-09-12 Intel Corporation Host-fabric adapter having hardware assist architecture and method of connecting a host system to a channel-based switched fabric in a data network
US6948001B1 (en) * 2000-11-02 2005-09-20 Radiant Data Corporation Modular software method for independent storage nodes
US6725393B1 (en) * 2000-11-06 2004-04-20 Hewlett-Packard Development Company, L.P. System, machine, and method for maintenance of mirrored datasets through surrogate writes during storage-area network transients
US20020154633A1 (en) 2000-11-22 2002-10-24 Yeshik Shin Communications architecture for storage-based devices
US20020071450A1 (en) * 2000-12-08 2002-06-13 Gasbarro Dominic J. Host-fabric adapter having bandwidth-optimizing, area-minimal, vertical sliced memory architecture and method of connecting a host system to a channel-based switched fabric in a data network
US7287090B1 (en) * 2000-12-21 2007-10-23 Noatak Software, Llc Method and system for identifying a computing device in response to a request packet
US7546369B2 (en) * 2000-12-21 2009-06-09 Berg Mitchell T Method and system for communicating a request packet in response to a state
US20020116605A1 (en) * 2000-12-21 2002-08-22 Berg Mitchell T. Method and system for initiating execution of software in response to a state
US20020116397A1 (en) 2000-12-21 2002-08-22 Berg Mitchell T. Method and system for communicating an information packet through multiple router devices
US7421505B2 (en) * 2000-12-21 2008-09-02 Noatak Software Llc Method and system for executing protocol stack instructions to form a packet for causing a computing device to perform an operation
US7418522B2 (en) * 2000-12-21 2008-08-26 Noatak Software Llc Method and system for communicating an information packet through multiple networks
US7512686B2 (en) * 2000-12-21 2009-03-31 Berg Mitchell T Method and system for establishing a data structure of a connection with a client
US20020116532A1 (en) * 2000-12-21 2002-08-22 Berg Mitchell T. Method and system for communicating an information packet and identifying a data structure
US7237012B1 (en) * 2000-12-29 2007-06-26 Nortel Networks Limited Method and apparatus for classifying Java remote method invocation transport traffic
US7042891B2 (en) * 2001-01-04 2006-05-09 Nishan Systems, Inc. Dynamic selection of lowest latency path in a network switch
EP1370950B1 (en) * 2001-02-13 2017-12-27 NetApp, Inc. System and method for policy based storage provisioning and management
JP4483100B2 (en) * 2001-02-20 2010-06-16 株式会社日立製作所 Network connection device
US7006531B2 (en) * 2001-02-21 2006-02-28 Integrated Device Technology, Inc. Method and apparatus for transmitting streamed ingressing data through a switch fabric that provides read requests at an ingress independent request rate
US20040233910A1 (en) * 2001-02-23 2004-11-25 Wen-Shyen Chen Storage area network using a data communication protocol
WO2002069126A2 (en) * 2001-02-28 2002-09-06 Crossroads Systems, Inc. Method and system for overlapping data flow within a scsi extended copy command
US7114009B2 (en) * 2001-03-16 2006-09-26 San Valley Systems Encapsulating Fibre Channel signals for transmission over non-Fibre Channel networks
US7401126B2 (en) 2001-03-23 2008-07-15 Neteffect, Inc. Transaction switch and network interface adapter incorporating same
US7072823B2 (en) * 2001-03-30 2006-07-04 Intransa, Inc. Method and apparatus for accessing memory using Ethernet packets
US6883042B1 (en) * 2001-04-25 2005-04-19 Adaptec, Inc. Method and structure for automatic SCSI command delivery using the packetized SCSI protocol
US20030210685A1 (en) * 2001-04-27 2003-11-13 Foster Michael S. Method and system for interswitch deadlock avoidance in a communications network
US20020159458A1 (en) * 2001-04-27 2002-10-31 Foster Michael S. Method and system for reserved addressing in a communications network
US6580731B1 (en) * 2001-05-18 2003-06-17 Network Elements, Inc. Multi-stage SONET overhead processing
US7002967B2 (en) * 2001-05-18 2006-02-21 Denton I Claude Multi-protocol networking processor with data traffic support spanning local, regional and wide area networks
US6973085B1 (en) * 2001-06-18 2005-12-06 Advanced Micro Devices, Inc. Using application headers to determine InfiniBand™ priorities in an InfiniBand™ network
US20020198927A1 (en) * 2001-06-21 2002-12-26 International Business Machines Corporation Apparatus and method for routing internet protocol frames over a system area network
US7110394B1 (en) * 2001-06-25 2006-09-19 Sanera Systems, Inc. Packet switching apparatus including cascade ports and method for switching packets
US7685261B1 (en) 2001-06-29 2010-03-23 Symantec Operating Corporation Extensible architecture for the centralized discovery and management of heterogeneous SAN components
US6816889B1 (en) * 2001-07-03 2004-11-09 Advanced Micro Devices, Inc. Assignment of dual port memory banks for a CPU and a host channel adapter in an InfiniBand computing node
EP1415425B1 (en) * 2001-07-06 2019-06-26 CA, Inc. Systems and methods of information backup
US6985490B2 (en) * 2001-07-11 2006-01-10 Sancastle Technologies, Ltd. Extension of fibre channel addressing
US7289499B1 (en) * 2001-07-16 2007-10-30 Network Appliance, Inc. Integrated system and method for controlling telecommunication network data communicated over a local area network and storage data communicated over a storage area network
US7239642B1 (en) * 2001-07-16 2007-07-03 Network Appliance, Inc. Multi-protocol network interface card
US7404206B2 (en) * 2001-07-17 2008-07-22 Yottayotta, Inc. Network security devices and methods
US6912231B2 (en) * 2001-07-26 2005-06-28 Northrop Grumman Corporation Multi-broadcast bandwidth control system
US20030056000A1 (en) * 2001-07-26 2003-03-20 Nishan Systems, Inc. Transfer ready frame reordering
US7215680B2 (en) * 2001-07-26 2007-05-08 Nishan Systems, Inc. Method and apparatus for scheduling packet flow on a fibre channel arbitrated loop
US7075953B2 (en) * 2001-07-30 2006-07-11 Network-Elements, Inc. Programmable SONET framing
US20030033463A1 (en) * 2001-08-10 2003-02-13 Garnett Paul J. Computer system storage
US7110396B2 (en) * 2001-08-20 2006-09-19 Ciena Corporation System for transporting sub-rate data over a communication network
US7194550B1 (en) * 2001-08-30 2007-03-20 Sanera Systems, Inc. Providing a single hop communication path between a storage device and a network switch
US7472231B1 (en) 2001-09-07 2008-12-30 Netapp, Inc. Storage area network data cache
US7389332B1 (en) 2001-09-07 2008-06-17 Cisco Technology, Inc. Method and apparatus for supporting communications between nodes operating in a master-slave configuration
WO2003023640A2 (en) * 2001-09-07 2003-03-20 Sanrad Load balancing method for exchanging data between multiple hosts and storage entities, in ip based storage area network
JP3712369B2 (en) * 2001-09-13 2005-11-02 アライドテレシスホールディングス株式会社 Media converter and link disconnection method thereof
US6895590B2 (en) * 2001-09-26 2005-05-17 Intel Corporation Method and system enabling both legacy and new applications to access an InfiniBand fabric via a socket API
US7864758B1 (en) * 2001-09-28 2011-01-04 Emc Corporation Virtualization in a storage system
US7185062B2 (en) * 2001-09-28 2007-02-27 Emc Corporation Switch-based storage services
US7421509B2 (en) * 2001-09-28 2008-09-02 Emc Corporation Enforcing quality of service in a storage network
US6976134B1 (en) 2001-09-28 2005-12-13 Emc Corporation Pooling and provisioning storage resources in a storage network
US7707304B1 (en) * 2001-09-28 2010-04-27 Emc Corporation Storage switch for storage area network
WO2003030431A2 (en) * 2001-09-28 2003-04-10 Maranti Networks, Inc. Packet classification in a storage system
US7404000B2 (en) * 2001-09-28 2008-07-22 Emc Corporation Protocol translation in a storage system
US7558264B1 (en) 2001-09-28 2009-07-07 Emc Corporation Packet classification in a storage system
US7200144B2 (en) * 2001-10-18 2007-04-03 Qlogic, Corp. Router and methods using network addresses for virtualization
US7133907B2 (en) * 2001-10-18 2006-11-07 Sun Microsystems, Inc. Method, system, and program for configuring system resources
US7447197B2 (en) * 2001-10-18 2008-11-04 Qlogic, Corporation System and method of providing network node services
US6965559B2 (en) * 2001-10-19 2005-11-15 Sun Microsystems, Inc. Method, system, and program for discovering devices communicating through a switch
US20030084219A1 (en) * 2001-10-26 2003-05-01 Maxxan Systems, Inc. System, apparatus and method for address forwarding for a computer network
US7589737B2 (en) * 2001-10-31 2009-09-15 Hewlett-Packard Development Company, L.P. System and method for communicating graphics image data over a communication network
US7308001B2 (en) * 2001-11-16 2007-12-11 Computer Network Technology Corporation Fibre channel frame batching for IP transmission
US7424019B1 (en) 2001-11-27 2008-09-09 Marvell Israel (M.I.S.L) Ltd. Packet header altering device
US7548975B2 (en) * 2002-01-09 2009-06-16 Cisco Technology, Inc. Methods and apparatus for implementing virtualization of storage within a storage area network through a virtual enclosure
US7433948B2 (en) * 2002-01-23 2008-10-07 Cisco Technology, Inc. Methods and apparatus for implementing virtualization of storage within a storage area network
US7197571B2 (en) * 2001-12-29 2007-03-27 International Business Machines Corporation System and method for improving backup performance of media and dynamic ready to transfer control mechanism
US7145914B2 (en) * 2001-12-31 2006-12-05 Maxxan Systems, Incorporated System and method for controlling data paths of a network processor subsystem
US7085846B2 (en) * 2001-12-31 2006-08-01 Maxxan Systems, Incorporated Buffer to buffer credit flow control for computer network
US20030126283A1 (en) * 2001-12-31 2003-07-03 Ramkrishna Prakash Architectural basis for the bridging of SAN and LAN infrastructures
US7155494B2 (en) * 2002-01-09 2006-12-26 Sancastle Technologies Ltd. Mapping between virtual local area networks and fibre channel zones
US20030131128A1 (en) * 2002-01-10 2003-07-10 Stanton Kevin B. Vlan mpls mapping: method to establish end-to-traffic path spanning local area network and a global network
US20030135609A1 (en) * 2002-01-16 2003-07-17 Sun Microsystems, Inc. Method, system, and program for determining a modification of a system resource configuration
US7349992B2 (en) * 2002-01-24 2008-03-25 Emulex Design & Manufacturing Corporation System for communication with a storage area network
US6963932B2 (en) * 2002-01-30 2005-11-08 Intel Corporation Intermediate driver having a fail-over function for a virtual network interface card in a system utilizing Infiniband architecture
US20040205252A1 (en) * 2002-01-31 2004-10-14 Brocade Communications Systems, Inc. Methods and devices for converting between trunked and single-link data transmission in a fibre channel network
US20030149869A1 (en) * 2002-02-01 2003-08-07 Paul Gleichauf Method and system for securely storing and trasmitting data by applying a one-time pad
US7173943B1 (en) * 2002-02-26 2007-02-06 Computer Access Technology Corporation Protocol analyzer and time precise method for capturing multi-directional packet traffic
US7133416B1 (en) * 2002-03-05 2006-11-07 Mcdata Corporation Converting data signals in a multiple communication protocol system area network
US7421478B1 (en) 2002-03-07 2008-09-02 Cisco Technology, Inc. Method and apparatus for exchanging heartbeat messages and configuration information between nodes operating in a master-slave configuration
US8051197B2 (en) 2002-03-29 2011-11-01 Brocade Communications Systems, Inc. Network congestion management systems and methods
US7295561B1 (en) 2002-04-05 2007-11-13 Ciphermax, Inc. Fibre channel implementation using network processors
US7406038B1 (en) 2002-04-05 2008-07-29 Ciphermax, Incorporated System and method for expansion of computer network switching system without disruption thereof
US7379970B1 (en) 2002-04-05 2008-05-27 Ciphermax, Inc. Method and system for reduced distributed event handling in a network environment
US7307995B1 (en) 2002-04-05 2007-12-11 Ciphermax, Inc. System and method for linking a plurality of network switches
US20030195956A1 (en) * 2002-04-15 2003-10-16 Maxxan Systems, Inc. System and method for allocating unique zone membership
US7433952B1 (en) 2002-04-22 2008-10-07 Cisco Technology, Inc. System and method for interconnecting a storage area network
US7415535B1 (en) * 2002-04-22 2008-08-19 Cisco Technology, Inc. Virtual MAC address system and method
US6895461B1 (en) * 2002-04-22 2005-05-17 Cisco Technology, Inc. Method and apparatus for accessing remote storage using SCSI and an IP network
US7165258B1 (en) * 2002-04-22 2007-01-16 Cisco Technology, Inc. SCSI-based storage area network having a SCSI router that routes traffic between SCSI and IP networks
US7188194B1 (en) * 2002-04-22 2007-03-06 Cisco Technology, Inc. Session-based target/LUN mapping for a storage area network and associated method
US7200610B1 (en) 2002-04-22 2007-04-03 Cisco Technology, Inc. System and method for configuring fibre-channel devices
US7281062B1 (en) * 2002-04-22 2007-10-09 Cisco Technology, Inc. Virtual SCSI bus for SCSI-based storage area network
US7587465B1 (en) 2002-04-22 2009-09-08 Cisco Technology, Inc. Method and apparatus for configuring nodes as masters or slaves
US20030200330A1 (en) * 2002-04-22 2003-10-23 Maxxan Systems, Inc. System and method for load-sharing computer network switch
US7398326B2 (en) * 2002-04-25 2008-07-08 International Business Machines Corporation Methods for management of mixed protocol storage area networks
US7277442B1 (en) * 2002-04-26 2007-10-02 At&T Corp. Ethernet-to-ATM interworking that conserves VLAN assignments
US7404012B2 (en) * 2002-05-06 2008-07-22 Qlogic, Corporation System and method for dynamic link aggregation in a shared I/O subsystem
US7356608B2 (en) * 2002-05-06 2008-04-08 Qlogic, Corporation System and method for implementing LAN within shared I/O subsystem
US7447778B2 (en) * 2002-05-06 2008-11-04 Qlogic, Corporation System and method for a shared I/O subsystem
US7328284B2 (en) * 2002-05-06 2008-02-05 Qlogic, Corporation Dynamic configuration of network data flow using a shared I/O subsystem
JP4032816B2 (en) * 2002-05-08 2008-01-16 株式会社日立製作所 Storage network topology management system
US7240098B1 (en) 2002-05-09 2007-07-03 Cisco Technology, Inc. System, method, and software for a virtual host bus adapter in a storage-area network
US7509436B1 (en) 2002-05-09 2009-03-24 Cisco Technology, Inc. System and method for increased virtual driver throughput
US7385971B1 (en) 2002-05-09 2008-06-10 Cisco Technology, Inc. Latency reduction in network data transfer operations
JP2003330762A (en) * 2002-05-09 2003-11-21 Hitachi Ltd Control method for storage system, storage system, switch and program
US7471628B2 (en) * 2002-06-10 2008-12-30 Cisco Technology, Inc. Intelligent flow control management to extend fibre channel link full performance range
US9787524B1 (en) * 2002-07-23 2017-10-10 Brocade Communications Systems, Inc. Fibre channel virtual host bus adapter
JP3869769B2 (en) * 2002-07-24 2007-01-17 株式会社日立製作所 Switching node device for storage network and access method of remote storage device
US7206314B2 (en) * 2002-07-30 2007-04-17 Brocade Communications Systems, Inc. Method and apparatus for transparent communication between a fibre channel network and an infiniband network
US20040022200A1 (en) * 2002-07-31 2004-02-05 Sun Microsystems, Inc. Method, system, and program for providing information on components within a network
US6826631B2 (en) * 2002-07-31 2004-11-30 Intel Corporation System and method for indicating the status of a communications link and traffic activity on non-protocol aware modules
US20040059806A1 (en) * 2002-07-31 2004-03-25 Webb Randall K. System and method for indicating the status of a communications link/traffic activity on non-protocol aware modules
US7143615B2 (en) * 2002-07-31 2006-12-05 Sun Microsystems, Inc. Method, system, and program for discovering components within a network
US20040024887A1 (en) * 2002-07-31 2004-02-05 Sun Microsystems, Inc. Method, system, and program for generating information on components within a network
US7263108B2 (en) * 2002-08-06 2007-08-28 Netxen, Inc. Dual-mode network storage systems and methods
US20040030766A1 (en) * 2002-08-12 2004-02-12 Michael Witkowski Method and apparatus for switch fabric configuration
US7616631B2 (en) * 2002-08-14 2009-11-10 Lsi Corporation Method and apparatus for debugging protocol traffic between devices in integrated subsystems
US7653526B1 (en) 2002-08-16 2010-01-26 Cisco Technology, Inc. Method and system for emulating an ethernet link over a sonet path
US8805918B1 (en) 2002-09-11 2014-08-12 Cisco Technology, Inc. Methods and apparatus for implementing exchange management for virtualization of storage within a storage area network
US20040054808A1 (en) * 2002-09-13 2004-03-18 Sun Microsystems, Inc. Method and apparatus for bi-directional translation of naming service data
KR100458373B1 (en) * 2002-09-18 2004-11-26 전자부품연구원 Method and apparatus for integration processing of different network protocols and multimedia traffics
US7401338B1 (en) 2002-09-27 2008-07-15 Symantec Operating Corporation System and method for an access layer application programming interface for managing heterogeneous components of a storage area network
US8024418B1 (en) * 2002-10-25 2011-09-20 Cisco Technology, Inc. Reserve release proxy
US20040081196A1 (en) * 2002-10-29 2004-04-29 Elliott Stephen J. Protocol independent hub
US20040093607A1 (en) * 2002-10-29 2004-05-13 Elliott Stephen J System providing operating system independent access to data storage devices
US20080008202A1 (en) * 2002-10-31 2008-01-10 Terrell William C Router with routing processors and methods for virtualization
US7701953B2 (en) * 2002-11-04 2010-04-20 At&T Intellectual Property I, L.P. Client server SVC-based DSL service
US7602788B2 (en) * 2002-11-04 2009-10-13 At&T Intellectual Property I, L.P. Peer to peer SVC-based DSL service
KR100449807B1 (en) * 2002-12-20 2004-09-22 한국전자통신연구원 System for controlling Data Transfer Protocol with a Host Bus Interface
US7382788B2 (en) * 2002-12-24 2008-06-03 Applied Micro Circuit Corporation Method and apparatus for implementing a data frame processing model
US20040120333A1 (en) * 2002-12-24 2004-06-24 David Geddes Method and apparatus for controlling information flow through a protocol bridge
US7782784B2 (en) * 2003-01-10 2010-08-24 Cisco Technology, Inc. Port analyzer adapter
US7899048B1 (en) 2003-01-15 2011-03-01 Cisco Technology, Inc. Method and apparatus for remotely monitoring network traffic through a generic network
US8346884B2 (en) 2003-01-21 2013-01-01 Nextio Inc. Method and apparatus for a shared I/O network interface controller
US7664909B2 (en) * 2003-04-18 2010-02-16 Nextio, Inc. Method and apparatus for a shared I/O serial ATA controller
US7457906B2 (en) * 2003-01-21 2008-11-25 Nextio, Inc. Method and apparatus for shared I/O in a load/store fabric
US7917658B2 (en) * 2003-01-21 2011-03-29 Emulex Design And Manufacturing Corporation Switching apparatus and method for link initialization in a shared I/O environment
US7512717B2 (en) * 2003-01-21 2009-03-31 Nextio Inc. Fibre channel controller shareable by a plurality of operating system domains within a load-store architecture
US7617333B2 (en) * 2003-01-21 2009-11-10 Nextio Inc. Fibre channel controller shareable by a plurality of operating system domains within a load-store architecture
US8032659B2 (en) * 2003-01-21 2011-10-04 Nextio Inc. Method and apparatus for a shared I/O network interface controller
US7103064B2 (en) * 2003-01-21 2006-09-05 Nextio Inc. Method and apparatus for shared I/O in a load/store fabric
US7836211B2 (en) * 2003-01-21 2010-11-16 Emulex Design And Manufacturing Corporation Shared input/output load-store architecture
US7493416B2 (en) 2003-01-21 2009-02-17 Nextio Inc. Fibre channel controller shareable by a plurality of operating system domains within a load-store architecture
US7502370B2 (en) * 2003-01-21 2009-03-10 Nextio Inc. Network controller for obtaining a plurality of network port identifiers in response to load-store transactions from a corresponding plurality of operating system domains within a load-store architecture
US8102843B2 (en) * 2003-01-21 2012-01-24 Emulex Design And Manufacturing Corporation Switching apparatus and method for providing shared I/O within a load-store fabric
US7953074B2 (en) * 2003-01-21 2011-05-31 Emulex Design And Manufacturing Corporation Apparatus and method for port polarity initialization in a shared I/O device
US7698483B2 (en) * 2003-01-21 2010-04-13 Nextio, Inc. Switching apparatus and method for link initialization in a shared I/O environment
US7103711B2 (en) * 2003-01-21 2006-09-05 Brocade Communications Systems, Inc. Data logging by storage area network devices to a reserved storage area on the network
US7046668B2 (en) * 2003-01-21 2006-05-16 Pettey Christopher J Method and apparatus for shared I/O in a load/store fabric
US7957409B2 (en) * 2003-01-23 2011-06-07 Cisco Technology, Inc. Methods and devices for transmitting data between storage area networks
US7738493B2 (en) * 2003-01-23 2010-06-15 Cisco Technology, Inc. Methods and devices for transmitting data between storage area networks
GB2397966B (en) * 2003-02-01 2005-04-20 3Com Corp High-speed switch architecture
US7317689B1 (en) 2003-02-10 2008-01-08 Foundry Networks, Inc. System and method to access and address high-speed interface converter devices
US7817656B1 (en) 2003-02-13 2010-10-19 Cisco Technology, Inc. Fibre-channel over-subscription over DWDM/SONET/SDH optical transport systems
US7831736B1 (en) 2003-02-27 2010-11-09 Cisco Technology, Inc. System and method for supporting VLANs in an iSCSI
US7461131B2 (en) * 2003-03-07 2008-12-02 International Business Machines Corporation Use of virtual targets for preparing and servicing requests for server-free data transfer operations
US7020814B2 (en) * 2003-03-18 2006-03-28 Cisco Technology, Inc. Method and system for emulating a Fiber Channel link over a SONET/SDH path
US7295572B1 (en) 2003-03-26 2007-11-13 Cisco Technology, Inc. Storage router and method for routing IP datagrams between data path processors using a fibre channel switch
US7904599B1 (en) 2003-03-28 2011-03-08 Cisco Technology, Inc. Synchronization and auditing of zone configuration data in storage-area networks
US7433300B1 (en) 2003-03-28 2008-10-07 Cisco Technology, Inc. Synchronization of configuration data in storage-area networks
US7706294B2 (en) 2003-03-31 2010-04-27 Cisco Technology, Inc. Apparatus and method for enabling intelligent Fibre-Channel connectivity over transport
US7526527B1 (en) 2003-03-31 2009-04-28 Cisco Technology, Inc. Storage area network interconnect server
US7145877B2 (en) * 2003-03-31 2006-12-05 Cisco Technology, Inc. Apparatus and method for distance extension of fibre-channel over transport
US20040196841A1 (en) * 2003-04-04 2004-10-07 Tudor Alexander L. Assisted port monitoring with distributed filtering
US20040215764A1 (en) * 2003-04-23 2004-10-28 Sun Microsystems, Inc. Method, system, and program for rendering a visualization of aggregations of network devices
GB2401279B (en) * 2003-04-29 2005-06-01 3Com Corp Switch module architecture
US9712613B2 (en) * 2003-04-29 2017-07-18 Brocade Communications Systems, Inc. Fibre channel fabric copy service
JP4060235B2 (en) 2003-05-22 2008-03-12 株式会社日立製作所 Disk array device and disk array device control method
JP2004348464A (en) * 2003-05-22 2004-12-09 Hitachi Ltd Storage device and communication signal shaping circuit
US7359975B2 (en) * 2003-05-22 2008-04-15 International Business Machines Corporation Method, system, and program for performing a data transfer operation with respect to source and target storage devices in a network
US7353299B2 (en) 2003-05-29 2008-04-01 International Business Machines Corporation Method and apparatus for managing autonomous third party data transfers
US7356622B2 (en) * 2003-05-29 2008-04-08 International Business Machines Corporation Method and apparatus for managing and formatting metadata in an autonomous operation conducted by a third party
US7885256B1 (en) 2003-05-30 2011-02-08 Symantec Operating Corporation SAN fabric discovery
US7187650B2 (en) * 2003-06-10 2007-03-06 Cisco Technology, Inc. Fibre channel frame-mode GFP with distributed delimiter
US7451208B1 (en) 2003-06-28 2008-11-11 Cisco Technology, Inc. Systems and methods for network address failover
US7515593B2 (en) * 2003-07-03 2009-04-07 Cisco Technology, Inc. Method and system for efficient flow control for client data frames over GFP across a SONET/SDH transport path
US7644194B2 (en) * 2003-07-14 2010-01-05 Broadcom Corporation Method and system for addressing a plurality of Ethernet controllers integrated into a single chip which utilizes a single bus interface
US7552294B1 (en) 2003-08-07 2009-06-23 Crossroads Systems, Inc. System and method for processing multiple concurrent extended copy commands to a single destination device
US7447852B1 (en) 2003-08-07 2008-11-04 Crossroads Systems, Inc. System and method for message and error reporting for multiple concurrent extended copy commands to a single destination device
US7251708B1 (en) 2003-08-07 2007-07-31 Crossroads Systems, Inc. System and method for maintaining and reporting a log of multi-threaded backups
US7409442B2 (en) * 2003-08-25 2008-08-05 International Business Machines Corporation Method for communicating control messages between a first device and a second device
US8165136B1 (en) 2003-09-03 2012-04-24 Cisco Technology, Inc. Virtual port based SPAN
US20050066045A1 (en) * 2003-09-03 2005-03-24 Johnson Neil James Integrated network interface supporting multiple data transfer protocols
US7474666B2 (en) 2003-09-03 2009-01-06 Cisco Technology, Inc. Switch port analyzers
US8417834B2 (en) * 2003-09-10 2013-04-09 Broadcom Corporation Unified infrastructure over ethernet
US8285881B2 (en) * 2003-09-10 2012-10-09 Broadcom Corporation System and method for load balancing and fail over
US20050114469A1 (en) * 2003-09-16 2005-05-26 Manabu Nakamura Information processing apparatus with a network service function and method of providing network services
JP4137757B2 (en) * 2003-10-01 2008-08-20 株式会社日立製作所 Network converter and information processing system
US20050078704A1 (en) * 2003-10-14 2005-04-14 International Business Machines Corporation Method and apparatus for translating data packets from one network protocol to another
US20130208732A1 (en) * 2012-02-15 2013-08-15 Alex E. Henderson Transporting Fibre Channel over Ethernet
US11108591B2 (en) * 2003-10-21 2021-08-31 John W. Hayes Transporting fibre channel over ethernet
US7603453B1 (en) * 2003-10-24 2009-10-13 Network Appliance, Inc. Creating links between nodes connected to a fibre channel (FC) fabric
US7533175B1 (en) * 2003-10-24 2009-05-12 Network Appliance, Inc. Network address resolution and forwarding TCP/IP packets over a fibre channel network
US7779137B1 (en) * 2003-10-24 2010-08-17 Network Appliance, Inc. IP aliasing and address resolution using a fibre channel (FC) fabric name server
US7613785B2 (en) * 2003-11-20 2009-11-03 International Business Machines Corporation Decreased response time for peer-to-peer remote copy write operation
JP4156499B2 (en) 2003-11-28 2008-09-24 株式会社日立製作所 Disk array device
US7934023B2 (en) * 2003-12-01 2011-04-26 Cisco Technology, Inc. Apparatus and method for performing fast fibre channel write operations over relatively high latency networks
US7620695B2 (en) * 2003-12-02 2009-11-17 International Business Machines Corporation Storing fibre channel information on an Infiniband administration data base
US7586942B2 (en) * 2003-12-09 2009-09-08 Dell Products L.P. Identifying host computers at a physical layer
US7684440B1 (en) * 2003-12-18 2010-03-23 Nvidia Corporation Method and apparatus for maximizing peer-to-peer frame sizes within a network supporting a plurality of frame sizes
JP4497918B2 (en) * 2003-12-25 2010-07-07 株式会社日立製作所 Storage system
US7391728B2 (en) * 2003-12-30 2008-06-24 Cisco Technology, Inc. Apparatus and method for improved Fibre Channel oversubscription over transport
US7447788B2 (en) 2004-01-27 2008-11-04 Dell Products L.P. Providing host information to devices in multi SCSI transport protocols
JP4634049B2 (en) 2004-02-04 2011-02-16 株式会社日立製作所 Error notification control in disk array system
US7949792B2 (en) * 2004-02-27 2011-05-24 Cisco Technology, Inc. Encoding a TCP offload engine within FCP
US20050192967A1 (en) * 2004-03-01 2005-09-01 Cisco Technology, Inc. Apparatus and method for performing fast fibre channel write operations over relatively high latency networks
US7240135B2 (en) * 2004-03-05 2007-07-03 International Business Machines Corporation Method of balancing work load with prioritized tasks across a multitude of communication ports
US7325075B1 (en) 2004-03-15 2008-01-29 Hewlett-Packard Development Company, L.P. Methods for address and name discovery for Ethernet entities
US7505261B2 (en) * 2004-03-18 2009-03-17 Hewlett-Packard Development Company, L.P. Electrical-optical signal conversion for automated storage systems
US8543737B2 (en) * 2004-05-12 2013-09-24 Broadcom Corporation System and method to control access to data stored in a data storage device
US7676603B2 (en) * 2004-04-20 2010-03-09 Intel Corporation Write combining protocol between processors and chipsets
US7529781B2 (en) * 2004-04-30 2009-05-05 Emc Corporation Online initial mirror synchronization and mirror synchronization verification in storage area networks
US8996455B2 (en) * 2004-04-30 2015-03-31 Netapp, Inc. System and method for configuring a storage network utilizing a multi-protocol storage appliance
JP2005339323A (en) * 2004-05-28 2005-12-08 Hitachi Ltd Storage system, computing system, and interface module
US8228931B1 (en) * 2004-07-15 2012-07-24 Ciena Corporation Distributed virtual storage switch
US9264384B1 (en) 2004-07-22 2016-02-16 Oracle International Corporation Resource virtualization mechanism including virtual host bus adapters
KR100703494B1 (en) * 2004-08-09 2007-04-03 삼성전자주식회사 Apparatus and Method for Transporting/receiving of Voice over Internet Protocol Packets with a User Datagram Protocol checksum in a mobile communication system
US7814189B2 (en) * 2004-08-12 2010-10-12 Broadcom Corporation Method and system to connect multiple SCSI initiators to a fibre channel fabric topology using a single N-port
US7907626B2 (en) * 2004-08-12 2011-03-15 Broadcom Corporation Method and system to allocate exchange identifications for Fibre Channel N-Port aggregation
US7969971B2 (en) 2004-10-22 2011-06-28 Cisco Technology, Inc. Ethernet extension for the data center
US7564869B2 (en) * 2004-10-22 2009-07-21 Cisco Technology, Inc. Fibre channel over ethernet
US8238347B2 (en) * 2004-10-22 2012-08-07 Cisco Technology, Inc. Fibre channel over ethernet
US7830793B2 (en) * 2004-10-22 2010-11-09 Cisco Technology, Inc. Network device architecture for consolidating input/output and reducing latency
US7801125B2 (en) * 2004-10-22 2010-09-21 Cisco Technology, Inc. Forwarding table reduction and multipath network forwarding
US7602720B2 (en) * 2004-10-22 2009-10-13 Cisco Technology, Inc. Active queue management methods and devices
US7653066B2 (en) * 2004-11-04 2010-01-26 Cisco Technology Inc. Method and apparatus for guaranteed in-order delivery for FICON over SONET/SDH transport
US7782845B2 (en) * 2004-11-19 2010-08-24 International Business Machines Corporation Arbitrated loop address management apparatus method and system
US7620047B2 (en) * 2004-11-23 2009-11-17 Emerson Network Power - Embedded Computing, Inc. Method of transporting a RapidIO packet over an IP packet network
US20060114933A1 (en) * 2004-12-01 2006-06-01 Sandy Douglas L Method of transporting an IP packet over a RapidIO network
US7827261B1 (en) 2004-12-22 2010-11-02 Crossroads Systems, Inc. System and method for device management
US20060151549A1 (en) * 2005-01-12 2006-07-13 Fisher David G Agricultural spreading device
US7672323B2 (en) * 2005-01-14 2010-03-02 Cisco Technology, Inc. Dynamic and intelligent buffer management for SAN extension
US7535917B1 (en) 2005-02-22 2009-05-19 Netapp, Inc. Multi-protocol network adapter
US7577151B2 (en) * 2005-04-01 2009-08-18 International Business Machines Corporation Method and apparatus for providing a network connection table
US7586936B2 (en) 2005-04-01 2009-09-08 International Business Machines Corporation Host Ethernet adapter for networking offload in server environment
US7492771B2 (en) 2005-04-01 2009-02-17 International Business Machines Corporation Method for performing a packet header lookup
US7508771B2 (en) 2005-04-01 2009-03-24 International Business Machines Corporation Method for reducing latency in a host ethernet adapter (HEA)
US7903687B2 (en) 2005-04-01 2011-03-08 International Business Machines Corporation Method for scheduling, writing, and reading data inside the partitioned buffer of a switch, router or packet processing device
US7606166B2 (en) * 2005-04-01 2009-10-20 International Business Machines Corporation System and method for computing a blind checksum in a host ethernet adapter (HEA)
US20060221953A1 (en) * 2005-04-01 2006-10-05 Claude Basso Method and apparatus for blind checksum and correction for network transmissions
US7697536B2 (en) * 2005-04-01 2010-04-13 International Business Machines Corporation Network communications for operating system partitions
US7706409B2 (en) 2005-04-01 2010-04-27 International Business Machines Corporation System and method for parsing, filtering, and computing the checksum in a host Ethernet adapter (HEA)
US7881332B2 (en) * 2005-04-01 2011-02-01 International Business Machines Corporation Configurable ports for a host ethernet adapter
US8458280B2 (en) * 2005-04-08 2013-06-04 Intel-Ne, Inc. Apparatus and method for packet transmission over a high speed network supporting remote direct memory access operations
EP2328089B1 (en) * 2005-04-20 2014-07-09 Axxana (Israel) Ltd. Remote data mirroring system
US9195397B2 (en) 2005-04-20 2015-11-24 Axxana (Israel) Ltd. Disaster-proof data recovery
US7609649B1 (en) * 2005-04-26 2009-10-27 Cisco Technology, Inc. Methods and apparatus for improving network based virtualization performance
US7565442B1 (en) 2005-06-08 2009-07-21 Cisco Technology, Inc. Method and system for supporting distance extension in networks having Y-cable protection
US7924873B2 (en) * 2005-07-26 2011-04-12 International Business Machines Corporation Dynamic translational topology layer for enabling connectivity for protocol aware applications
US9813283B2 (en) 2005-08-09 2017-11-07 Oracle International Corporation Efficient data transfer between servers and remote peripherals
US20070058620A1 (en) * 2005-08-31 2007-03-15 Mcdata Corporation Management of a switch fabric through functionality conservation
US9143841B2 (en) 2005-09-29 2015-09-22 Brocade Communications Systems, Inc. Federated management of intelligent service modules
US7961621B2 (en) * 2005-10-11 2011-06-14 Cisco Technology, Inc. Methods and devices for backward congestion notification
JP2007140699A (en) * 2005-11-15 2007-06-07 Hitachi Ltd Computer system and storage device and management server and communication control method
KR100799574B1 (en) 2005-12-08 2008-01-31 한국전자통신연구원 Switched router system with QoS guaranteed
US7889762B2 (en) * 2006-01-19 2011-02-15 Intel-Ne, Inc. Apparatus and method for in-line insertion and removal of markers
US7782905B2 (en) * 2006-01-19 2010-08-24 Intel-Ne, Inc. Apparatus and method for stateless CRC calculation
US8006011B2 (en) * 2006-02-07 2011-08-23 Cisco Technology, Inc. InfiniBand boot bridge with fibre channel target
US20070208820A1 (en) * 2006-02-17 2007-09-06 Neteffect, Inc. Apparatus and method for out-of-order placement and in-order completion reporting of remote direct memory access operations
US8078743B2 (en) * 2006-02-17 2011-12-13 Intel-Ne, Inc. Pipelined processing of RDMA-type network transactions
US8316156B2 (en) * 2006-02-17 2012-11-20 Intel-Ne, Inc. Method and apparatus for interfacing device drivers to single multi-function adapter
US7849232B2 (en) 2006-02-17 2010-12-07 Intel-Ne, Inc. Method and apparatus for using a single multi-function adapter with different operating systems
US7953866B2 (en) * 2006-03-22 2011-05-31 Mcdata Corporation Protocols for connecting intelligent service modules in a storage area network
GB0608085D0 (en) * 2006-04-25 2006-05-31 Intesym Ltd Network interface and router
US7539967B1 (en) * 2006-05-05 2009-05-26 Altera Corporation Self-configuring components on a device
US20080034167A1 (en) * 2006-08-03 2008-02-07 Cisco Technology, Inc. Processing a SCSI reserve in a network implementing network-based virtualization
US7769842B2 (en) * 2006-08-08 2010-08-03 Endl Texas, Llc Storage management unit to configure zoning, LUN masking, access controls, or other storage area network parameters
US8948199B2 (en) * 2006-08-30 2015-02-03 Mellanox Technologies Ltd. Fibre channel processing by a host channel adapter
US20080056287A1 (en) * 2006-08-30 2008-03-06 Mellanox Technologies Ltd. Communication between an infiniband fabric and a fibre channel network
US8774215B2 (en) * 2006-09-01 2014-07-08 Emulex Corporation Fibre channel over Ethernet
EP1912411B1 (en) * 2006-10-12 2010-03-31 Koninklijke KPN N.V. Method and system for service preparation of a residential network access device
US8055726B1 (en) * 2006-10-31 2011-11-08 Qlogic, Corporation Method and system for writing network data
US7925758B1 (en) 2006-11-09 2011-04-12 Symantec Operating Corporation Fibre accelerated pipe data transport
US20080181243A1 (en) * 2006-12-15 2008-07-31 Brocade Communications Systems, Inc. Ethernet forwarding in high performance fabrics
US20080159260A1 (en) * 2006-12-15 2008-07-03 Brocade Communications Systems, Inc. Fibre channel over ethernet frame
US20080159277A1 (en) * 2006-12-15 2008-07-03 Brocade Communications Systems, Inc. Ethernet over fibre channel
US7921243B1 (en) * 2007-01-05 2011-04-05 Marvell International Ltd. System and method for a DDR SDRAM controller
US8259720B2 (en) * 2007-02-02 2012-09-04 Cisco Technology, Inc. Triple-tier anycast addressing
US7917682B2 (en) * 2007-06-27 2011-03-29 Emulex Design & Manufacturing Corporation Multi-protocol controller that supports PCIe, SAS and enhanced Ethernet
US8149710B2 (en) 2007-07-05 2012-04-03 Cisco Technology, Inc. Flexible and hierarchical dynamic buffer allocation
TWI339964B (en) * 2007-07-31 2011-04-01 Ind Tech Res Inst Management architecture and diagnostic method for remote configuration of heterogeneous local networks
US8121038B2 (en) 2007-08-21 2012-02-21 Cisco Technology, Inc. Backward congestion notification
US8953486B2 (en) * 2007-11-09 2015-02-10 Cisco Technology, Inc. Global auto-configuration of network devices connected to multipoint virtual connections
US8667095B2 (en) * 2007-11-09 2014-03-04 Cisco Technology, Inc. Local auto-configuration of network devices connected to multipoint virtual connections
US8583780B2 (en) * 2007-11-20 2013-11-12 Brocade Communications Systems, Inc. Discovery of duplicate address in a network by reviewing discovery frames received at a port
US8108454B2 (en) * 2007-12-17 2012-01-31 Brocade Communications Systems, Inc. Address assignment in Fibre Channel over Ethernet environments
KR100948840B1 (en) * 2007-12-17 2010-03-22 한국전자통신연구원 An audio codec bit-rate control method to assure the QoS of the voice in WLAN
US9137175B2 (en) 2007-12-19 2015-09-15 Emulex Corporation High performance ethernet networking utilizing existing fibre channel fabric HBA technology
US8706862B2 (en) * 2007-12-21 2014-04-22 At&T Intellectual Property I, L.P. Methods and apparatus for performing non-intrusive data link layer performance measurement in communication networks
US8527663B2 (en) * 2007-12-21 2013-09-03 At&T Intellectual Property I, L.P. Methods and apparatus for performing non-intrusive network layer performance measurement in communication networks
JP4586873B2 (en) * 2008-03-28 2010-11-24 セイコーエプソン株式会社 Socket management apparatus and method
US8359379B1 (en) * 2008-04-30 2013-01-22 Netapp, Inc. Method of implementing IP-based proxy server for ISCSI services
WO2009136933A1 (en) * 2008-05-08 2009-11-12 Hewlett-Packard Development Company, L.P. A method for interfacing a fibre channel network with an ethernet based network
US20090296726A1 (en) * 2008-06-03 2009-12-03 Brocade Communications Systems, Inc. ACCESS CONTROL LIST MANAGEMENT IN AN FCoE ENVIRONMENT
US8400942B2 (en) * 2008-11-12 2013-03-19 Emulex Design & Manufacturing Corporation Large frame path MTU discovery and communication for FCoE devices
US8848575B2 (en) 2009-02-23 2014-09-30 Brocade Communications Systems, Inc. High availability and multipathing for fibre channel over ethernet
WO2010105092A1 (en) * 2009-03-12 2010-09-16 James Paul Rivers Providing fibre channel services and forwarding fibre channel over ethernet frames
US20100268855A1 (en) * 2009-04-16 2010-10-21 Sunny Koul Ethernet port on a controller for management of direct-attached storage subsystems from a management console
US8365057B2 (en) * 2009-07-30 2013-01-29 Mellanox Technologies Ltd Processing of data integrity field
US8745243B2 (en) * 2009-08-03 2014-06-03 Brocade Communications Systems, Inc. FCIP communications with load sharing and failover
US9973446B2 (en) 2009-08-20 2018-05-15 Oracle International Corporation Remote shared server peripherals over an Ethernet network for resource virtualization
WO2011067702A1 (en) * 2009-12-02 2011-06-09 Axxana (Israel) Ltd. Distributed intelligent network
US8279891B2 (en) * 2009-12-15 2012-10-02 Cisco Technology, Inc. Techniques for ethernet optical reach improvement
US9015333B2 (en) 2009-12-18 2015-04-21 Cisco Technology, Inc. Apparatus and methods for handling network file operations over a fibre channel network
US9632930B2 (en) * 2010-03-03 2017-04-25 Cisco Technology, Inc. Sub-area FCID allocation scheme
US8711864B1 (en) 2010-03-30 2014-04-29 Chengdu Huawei Symantec Technologies Co., Ltd. System and method for supporting fibre channel over ethernet communication
US8554974B2 (en) 2010-05-27 2013-10-08 International Business Machines Corporation Expanding functionality of one or more hard drive bays in a computing system
US8514856B1 (en) 2010-06-24 2013-08-20 Cisco Technology, Inc. End-to-end fibre channel over ethernet
US9331963B2 (en) 2010-09-24 2016-05-03 Oracle International Corporation Wireless host I/O using virtualized I/O controllers
US8769173B2 (en) * 2010-10-14 2014-07-01 International Business Machines Corporation Systems and methods for detecting supported small form-factor pluggable (SFP) devices
US9203876B2 (en) * 2011-03-16 2015-12-01 International Business Machines Corporation Automatic registration of devices
US8650300B2 (en) * 2011-06-07 2014-02-11 International Business Machines Corporation Transparent heterogenous link pairing
US10218756B2 (en) 2012-01-06 2019-02-26 Comcast Cable Communications, Llc Streamlined delivery of video content
US8798052B2 (en) 2012-08-15 2014-08-05 International Business Machines Corporation Relaying frames in a large layer 2 network fabric
US9083550B2 (en) 2012-10-29 2015-07-14 Oracle International Corporation Network virtualization over infiniband
US9769062B2 (en) 2013-06-12 2017-09-19 International Business Machines Corporation Load balancing input/output operations between two computers
US9274989B2 (en) 2013-06-12 2016-03-01 International Business Machines Corporation Impersonating SCSI ports through an intermediate proxy
US9779003B2 (en) 2013-06-12 2017-10-03 International Business Machines Corporation Safely mapping and unmapping host SCSI volumes
US9274916B2 (en) 2013-06-12 2016-03-01 International Business Machines Corporation Unit attention processing in proxy and owner storage systems
US8819317B1 (en) 2013-06-12 2014-08-26 International Business Machines Corporation Processing input/output requests using proxy and owner storage systems
US9940019B2 (en) 2013-06-12 2018-04-10 International Business Machines Corporation Online migration of a logical volume between storage systems
US9780993B2 (en) * 2013-06-26 2017-10-03 Amazon Technologies, Inc. Producer computing system leasing on behalf of consumer computing system
US9369518B2 (en) 2013-06-26 2016-06-14 Amazon Technologies, Inc. Producer system partitioning among leasing agent systems
US9430412B2 (en) 2013-06-26 2016-08-30 Cnex Labs, Inc. NVM express controller for remote access of memory and I/O over Ethernet-type networks
US10063638B2 (en) * 2013-06-26 2018-08-28 Cnex Labs, Inc. NVM express controller for remote access of memory and I/O over ethernet-type networks
US9843631B2 (en) 2013-06-26 2017-12-12 Amazon Technologies, Inc. Producer system selection
US9350801B2 (en) 2013-06-26 2016-05-24 Amazon Technologies, Inc. Managing client access to a plurality of computing systems
US9785355B2 (en) 2013-06-26 2017-10-10 Cnex Labs, Inc. NVM express controller for remote access of memory and I/O over ethernet-type networks
US9785356B2 (en) 2013-06-26 2017-10-10 Cnex Labs, Inc. NVM express controller for remote access of memory and I/O over ethernet-type networks
US10769028B2 (en) 2013-10-16 2020-09-08 Axxana (Israel) Ltd. Zero-transaction-loss recovery for database systems
US9853873B2 (en) 2015-01-10 2017-12-26 Cisco Technology, Inc. Diagnosis and throughput measurement of fibre channel ports in a storage area network environment
US9900250B2 (en) 2015-03-26 2018-02-20 Cisco Technology, Inc. Scalable handling of BGP route information in VXLAN with EVPN control plane
US10222986B2 (en) 2015-05-15 2019-03-05 Cisco Technology, Inc. Tenant-level sharding of disks with tenant-specific storage modules to enable policies per tenant in a distributed storage system
US10379958B2 (en) 2015-06-03 2019-08-13 Axxana (Israel) Ltd. Fast archiving for database systems
US11588783B2 (en) 2015-06-10 2023-02-21 Cisco Technology, Inc. Techniques for implementing IPV6-based distributed storage space
US10778765B2 (en) 2015-07-15 2020-09-15 Cisco Technology, Inc. Bid/ask protocol in scale-out NVMe storage
US9892075B2 (en) 2015-12-10 2018-02-13 Cisco Technology, Inc. Policy driven storage in a microserver computing environment
DE102015016616A1 (en) * 2015-12-22 2017-06-22 Giesecke & Devrient Gmbh Device and method for connecting a production device to a network
US10140172B2 (en) 2016-05-18 2018-11-27 Cisco Technology, Inc. Network-aware storage repairs
US20170351639A1 (en) 2016-06-06 2017-12-07 Cisco Technology, Inc. Remote memory access using memory mapped addressing among multiple compute nodes
US10664169B2 (en) 2016-06-24 2020-05-26 Cisco Technology, Inc. Performance of object storage system by reconfiguring storage devices based on latency that includes identifying a number of fragments that has a particular storage device as its primary storage device and another number of fragments that has said particular storage device as its replica storage device
US11563695B2 (en) 2016-08-29 2023-01-24 Cisco Technology, Inc. Queue protection using a shared global memory reserve
US10545914B2 (en) 2017-01-17 2020-01-28 Cisco Technology, Inc. Distributed object storage
US10243823B1 (en) 2017-02-24 2019-03-26 Cisco Technology, Inc. Techniques for using frame deep loopback capabilities for extended link diagnostics in fibre channel storage area networks
US10713203B2 (en) 2017-02-28 2020-07-14 Cisco Technology, Inc. Dynamic partition of PCIe disk arrays based on software configuration / policy distribution
US10254991B2 (en) 2017-03-06 2019-04-09 Cisco Technology, Inc. Storage area network based extended I/O metrics computation for deep insight into application performance
US10592326B2 (en) 2017-03-08 2020-03-17 Axxana (Israel) Ltd. Method and apparatus for data loss assessment
US10303534B2 (en) 2017-07-20 2019-05-28 Cisco Technology, Inc. System and method for self-healing of application centric infrastructure fabric memory
US10404596B2 (en) 2017-10-03 2019-09-03 Cisco Technology, Inc. Dynamic route profile storage in a hardware trie routing table
US10942666B2 (en) 2017-10-13 2021-03-09 Cisco Technology, Inc. Using network device replication in distributed storage clusters
JP2022169276A (en) * 2021-04-27 2022-11-09 横河電機株式会社 Redundancy method, redundancy program, and information processing device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6014370A (en) * 1998-02-20 2000-01-11 Nippon Telegraph & Telephone Corporation Apparatus for bridging between fibre channel networks and ATM network
US6233626B1 (en) * 1998-10-06 2001-05-15 Schneider Automation Inc. System for a modular terminal input/output interface for communicating messaging application layer over encoded ethernet to transport layer
US20020010813A1 (en) * 1997-12-31 2002-01-24 Hoese Geoffrey B. Storage router and method for providing virtual local storage

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5805805A (en) * 1995-08-04 1998-09-08 At&T Corp. Symmetric method and apparatus for interconnecting emulated lans
US6085238A (en) * 1996-04-23 2000-07-04 Matsushita Electric Works, Ltd. Virtual LAN system
US5894481A (en) * 1996-09-11 1999-04-13 Mcdata Corporation Fiber channel switch employing distributed queuing
US5905873A (en) * 1997-01-16 1999-05-18 Advanced Micro Devices, Inc. System and method of routing communications data with multiple protocols using crossbar switches
US6185203B1 (en) * 1997-02-18 2001-02-06 Vixel Corporation Fibre channel switching fabric
US6000020A (en) 1997-04-01 1999-12-07 Gadzoox Networks, Inc. Hierarchical storage management from a mirrored file system on a storage network segmented by a bridge
JP3228182B2 (en) * 1997-05-29 2001-11-12 株式会社日立製作所 Storage system and method for accessing storage system
US6085253A (en) * 1997-08-01 2000-07-04 United Video Properties, Inc. System and method for transmitting and receiving data
US5996024A (en) * 1998-01-14 1999-11-30 Emc Corporation Method and apparatus for a SCSI applications server which extracts SCSI commands and data from message and encapsulates SCSI responses to provide transparent operation
US6697846B1 (en) * 1998-03-20 2004-02-24 Dataplow, Inc. Shared file system
US6021454A (en) * 1998-03-27 2000-02-01 Adaptec, Inc. Data transfer between small computer system interface systems
US6148414A (en) * 1998-09-24 2000-11-14 Seek Systems, Inc. Methods and systems for implementing shared disk array management functions
US6738821B1 (en) * 1999-01-26 2004-05-18 Adaptec, Inc. Ethernet storage protocol networks
US20090052461A1 (en) * 2007-08-21 2009-02-26 Ibm Corporation Method and Apparatus for Fibre Channel Over Ethernet Data Packet Translation Via Look up Table Conversion Bridge in a Network System

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020010813A1 (en) * 1997-12-31 2002-01-24 Hoese Geoffrey B. Storage router and method for providing virtual local storage
US6014370A (en) * 1998-02-20 2000-01-11 Nippon Telegraph & Telephone Corporation Apparatus for bridging between fibre channel networks and ATM network
US6233626B1 (en) * 1998-10-06 2001-05-15 Schneider Automation Inc. System for a modular terminal input/output interface for communicating messaging application layer over encoded ethernet to transport layer

Cited By (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9647954B2 (en) 2000-03-21 2017-05-09 F5 Networks, Inc. Method and system for optimizing a network by independently scaling control segments and data flow
US9077554B1 (en) 2000-03-21 2015-07-07 F5 Networks, Inc. Simplified method for processing multiple connections from the same client
US20100309930A1 (en) * 2007-12-20 2010-12-09 Neil Harrison Adaptation scheme for communications traffic
US8615022B2 (en) * 2007-12-20 2013-12-24 British Telecommunications Public Limited Company Client/server adaptation scheme for communications traffic
US9077560B2 (en) 2007-12-20 2015-07-07 British Telecommunications Public Limited Company Adaptation scheme for communications traffic
US20100309924A1 (en) * 2007-12-20 2010-12-09 Neil Harrison Client/server adaptation scheme for communications traffic
US11108815B1 (en) 2009-11-06 2021-08-31 F5 Networks, Inc. Methods and system for returning requests with javascript for clients before passing a request to a server
US10721269B1 (en) 2009-11-06 2020-07-21 F5 Networks, Inc. Methods and system for returning requests with javascript for clients before passing a request to a server
US9141625B1 (en) 2010-06-22 2015-09-22 F5 Networks, Inc. Methods for preserving flow state during virtual machine migration and devices thereof
US10015286B1 (en) 2010-06-23 2018-07-03 F5 Networks, Inc. System and method for proxying HTTP single sign on across network domains
USRE47019E1 (en) 2010-07-14 2018-08-28 F5 Networks, Inc. Methods for DNSSEC proxying and deployment amelioration and systems thereof
US8886981B1 (en) 2010-09-15 2014-11-11 F5 Networks, Inc. Systems and methods for idle driven scheduling
US9554276B2 (en) 2010-10-29 2017-01-24 F5 Networks, Inc. System and method for on the fly protocol conversion in obtaining policy enforcement information
US10135831B2 (en) 2011-01-28 2018-11-20 F5 Networks, Inc. System and method for combining an access control system with a traffic management system
US9246819B1 (en) * 2011-06-20 2016-01-26 F5 Networks, Inc. System and method for performing message-based load balancing
US9985976B1 (en) 2011-12-30 2018-05-29 F5 Networks, Inc. Methods for identifying network traffic characteristics to correlate and manage one or more subsequent flows and devices thereof
US9270766B2 (en) 2011-12-30 2016-02-23 F5 Networks, Inc. Methods for identifying network traffic characteristics to correlate and manage one or more subsequent flows and devices thereof
US10230566B1 (en) 2012-02-17 2019-03-12 F5 Networks, Inc. Methods for dynamically constructing a service principal name and devices thereof
US9231879B1 (en) 2012-02-20 2016-01-05 F5 Networks, Inc. Methods for policy-based network traffic queue management and devices thereof
US9172753B1 (en) 2012-02-20 2015-10-27 F5 Networks, Inc. Methods for optimizing HTTP header based authentication and devices thereof
US10097616B2 (en) 2012-04-27 2018-10-09 F5 Networks, Inc. Methods for optimizing service of content requests and devices thereof
US9641386B2 (en) 2013-01-20 2017-05-02 International Business Machines Corporation Networking device port multiplexing
US9106985B2 (en) 2013-01-20 2015-08-11 International Business Machines Corporation Networking device port multiplexing
US10375155B1 (en) 2013-02-19 2019-08-06 F5 Networks, Inc. System and method for achieving hardware acceleration for asymmetric flow connections
US10187317B1 (en) 2013-11-15 2019-01-22 F5 Networks, Inc. Methods for traffic rate control and devices thereof
US10015143B1 (en) 2014-06-05 2018-07-03 F5 Networks, Inc. Methods for securing one or more license entitlement grants and devices thereof
US11838851B1 (en) 2014-07-15 2023-12-05 F5, Inc. Methods for managing L7 traffic classification and devices thereof
US10122630B1 (en) 2014-08-15 2018-11-06 F5 Networks, Inc. Methods for network traffic presteering and devices thereof
US10666553B2 (en) * 2014-11-05 2020-05-26 Bull Sas Method for quick reconfiguration of routing in the event of a fault in a port of a switch
US20170317923A1 (en) * 2014-11-05 2017-11-02 Bull Sas Method for quick reconfiguration of routing in the event of a fault in a port of a switch
US10182013B1 (en) 2014-12-01 2019-01-15 F5 Networks, Inc. Methods for managing progressive image delivery and devices thereof
US11895138B1 (en) 2015-02-02 2024-02-06 F5, Inc. Methods for improving web scanner accuracy and devices thereof
US10834065B1 (en) 2015-03-31 2020-11-10 F5 Networks, Inc. Methods for SSL protected NTLM re-authentication and devices thereof
US10505818B1 (en) 2015-05-05 2019-12-10 F5 Networks, Inc. Methods for analyzing and load balancing based on server health and devices thereof
US11350254B1 (en) 2015-05-05 2022-05-31 F5, Inc. Methods for enforcing compliance policies and devices thereof
US10659348B2 (en) 2015-09-30 2020-05-19 International Business Machines Corporation Holding of a link in an optical interface by a lower level processor until authorization is received from an upper level processor
US9942134B2 (en) 2015-09-30 2018-04-10 International Business Machines Corporation Holding of a link in an optical interface by a lower level processor until authorization is received from an upper level processor
US11757946B1 (en) 2015-12-22 2023-09-12 F5, Inc. Methods for analyzing network traffic and enforcing network policies and devices thereof
US10404698B1 (en) 2016-01-15 2019-09-03 F5 Networks, Inc. Methods for adaptive organization of web application access points in webtops and devices thereof
US11178150B1 (en) 2016-01-20 2021-11-16 F5 Networks, Inc. Methods for enforcing access control list based on managed application and devices thereof
US10797888B1 (en) 2016-01-20 2020-10-06 F5 Networks, Inc. Methods for secured SCEP enrollment for client devices and devices thereof
US10791088B1 (en) 2016-06-17 2020-09-29 F5 Networks, Inc. Methods for disaggregating subscribers via DHCP address translation and devices thereof
US11063758B1 (en) 2016-11-01 2021-07-13 F5 Networks, Inc. Methods for facilitating cipher selection and devices thereof
US10505792B1 (en) 2016-11-02 2019-12-10 F5 Networks, Inc. Methods for facilitating network traffic analytics and devices thereof
US10812266B1 (en) 2017-03-17 2020-10-20 F5 Networks, Inc. Methods for managing security tokens based on security violations and devices thereof
US10972453B1 (en) 2017-05-03 2021-04-06 F5 Networks, Inc. Methods for token refreshment based on single sign-on (SSO) for federated identity environments and devices thereof
US11122042B1 (en) 2017-05-12 2021-09-14 F5 Networks, Inc. Methods for dynamically managing user access control and devices thereof
US11343237B1 (en) 2017-05-12 2022-05-24 F5, Inc. Methods for managing a federated identity environment using security and access control data and devices thereof
US10887375B2 (en) 2017-07-13 2021-01-05 International Business Machines Corporation Shared memory device
US10447765B2 (en) 2017-07-13 2019-10-15 International Business Machines Corporation Shared memory device
US11122083B1 (en) 2017-09-08 2021-09-14 F5 Networks, Inc. Methods for managing network connections based on DNS data and network policies and devices thereof
CN107864099A (en) * 2017-10-23 2018-03-30 中国科学院空间应用工程与技术中心 A kind of flow control methods and system of isomery FC networks
US20220239765A1 (en) * 2021-01-27 2022-07-28 EMC IP Holding Company LLC Singular control path for mainframe storage
US11595501B2 (en) * 2021-01-27 2023-02-28 EMC IP Holding Company LLC Singular control path for mainframe storage

Also Published As

Publication number Publication date
US7197047B2 (en) 2007-03-27
US20070286233A1 (en) 2007-12-13
US20030091037A1 (en) 2003-05-15
US6400730B1 (en) 2002-06-04

Similar Documents

Publication Publication Date Title
US7197047B2 (en) Method and apparatus for transferring data between IP network devices and SCSI and fibre channel devices over an IP network
TWI446755B (en) A method for interfacing a fibre channel network with an ethernet based network
US7921240B2 (en) Method and system for supporting hardware acceleration for iSCSI read and write operations and iSCSI chimney
US7233570B2 (en) Long distance repeater for digital information
US7020715B2 (en) Protocol stack for linking storage area networks over an existing LAN, MAN, or WAN
US7599289B2 (en) Electronic communication control
US8180928B2 (en) Method and system for supporting read operations with CRC for iSCSI and iSCSI chimney
WO2001059966A1 (en) Method and apparatus for transferring data between different network devices over an ip network
US7640364B2 (en) Port aggregation for network connections that are offloaded to network interface devices
US7149819B2 (en) Work queue to TCP/IP translation
US9219683B2 (en) Unified infrastructure over ethernet
US7145866B1 (en) Virtual network devices
US20100217878A1 (en) Method, system, and program for enabling communication between nodes
EP1759317B1 (en) Method and system for supporting read operations for iscsi and iscsi chimney
US7099955B1 (en) End node partitioning using LMC for a system area network
US20050283545A1 (en) Method and system for supporting write operations with CRC for iSCSI and iSCSI chimney
US20050281261A1 (en) Method and system for supporting write operations for iSCSI and iSCSI chimney
Krueger et al. Small computer systems interface protocol over the Internet (iSCSI) requirements and design considerations
EP1158750B1 (en) Systems and method for peer-level communications with a network interface card
Krueger et al. RFC3347: Small Computer Systems Interface protocol over the Internet (iSCSI) Requirements and Design Considerations
Munson Introduction To Fibre Channel Connectivity.
Guendert Fibre Channel Standard
Bakke, M., Krueger, M., Haagens, R. (Hewlett-Packard Corporation), and Sapuntzakis, C. (Stanford). Network Working Group, Request for Comments 3347 (Informational): Small Computer Systems Interface protocol over the Internet (iSCSI) requirements and design considerations

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation. Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION