WO2002069159A1 - Storage area network using a data communication protocol - Google Patents

Storage area network using a data communication protocol

Info

Publication number
WO2002069159A1
Authority
WO
WIPO (PCT)
Prior art keywords
storage
client computer
server
generic
scsi
Prior art date
Application number
PCT/US2002/008001
Other languages
French (fr)
Inventor
Wen-Shyen Chen
John Christopher Lallier
Wayne Lam
Tat-Man Lee
Jianming Wu
Original Assignee
Falconstor Software, Incorporated
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Falconstor Software, Incorporated filed Critical Falconstor Software, Incorporated
Publication of WO2002069159A1 publication Critical patent/WO2002069159A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

Definitions

  • This invention relates generally to a system and method for connecting to server systems storage media such as disks and tapes. More particularly, this invention relates to connecting these devices using data communication protocols such as Internet Protocol (“IP”), Transmission Control Protocol (“TCP”), and/or User Datagram Protocol (“UDP”), which eliminate distance limitations between server systems and devices.
  • FIGURE 1A is a block diagram of a conventional attached-storage model.
  • The model includes a local area network ("LAN") 100 to which is attached several servers 110, each schematically shown running a different operating system (such as Windows NT™, Solaris™, Linux, etc.), and a network-attached storage ("NAS") appliance 120.
  • To each of the servers 110 is attached at least one storage device, for example, disk drives 130 and disk libraries or tape drives 140. No storage device is attached to more than one server.
  • The primary drawbacks of using SCSI-1 and SCSI-2 for storage are the limited number of SCSI devices (generally fewer than 8) that can be addressed by each server 110 on the LAN and the limited distance (generally less than 25 meters) allowed between the server and the storage device. These limitations make it difficult to share SCSI devices between two or more computers that are physically distant from each other.
  • FIGURE 1B is a block diagram of a Fibre Channel storage area network ("SAN") connected to the servers 110 of LAN 100.
  • The Fibre Channel SAN 160 allows the servers 110 to access Fibre Channel storage devices 170 or, using a Fibre Channel-SCSI bridge 180, SCSI disk drives 130 or disk libraries 140.
  • Fibre Channel technology has overcome many of the inherent SCSI limitations by enabling systems to send the SCSI protocol over a serial fiber optic bus that does not have the same limitations on number of connections, distances, and sharing that standard SCSI buses do.
  • However, distance limitations still exist with Fibre Channel architecture.
  • Another major drawback of implementing Fibre Channel is that it requires installing new technology between the SCSI devices and the computers, including controllers, adapters, routers and switches.
  • U.S. Patent No. 5,996,024 (the " '024 Patent”), issued Nov. 30, 1999, to Blumenau, overcomes several of these limitations. That reference discloses a host computer and a second computer linked via a high-speed network.
  • the host computer includes a network SCSI device driver, and a network SCSI applications server is executing in the second computer.
  • the high-speed network can be Megabit or Gigabit Ethernet and is able to run Internet protocol (“IP").
  • SCSI commands are encapsulated by the network SCSI device driver and transmitted over the network to the server, which extracts the SCSI commands and executes the SCSI commands on the SCSI devices.
  • the reference discloses that without the use of additional hardware (e.g., by using existing Ethernet connections), the host computer is able to access a number of SCSI devices attached to the second computer as if those devices were attached to the host computer. In this manner, the reference thus appears to overcome the speed and device limitations of SCSI, while not requiring the purchase of additional hardware.
  • the '024 Patent is limited, however, in a number of ways. First, the '024 Patent is limited to accessing SCSI devices. Second, that system is limited to the actual SCSI devices already attached to the server or servers; there is no provision for adding SCSI devices to the system. Third, the '024 Patent discloses only a static arrangement of devices.
  • The present invention includes a method and a system for storing and/or retrieving data, and can be implemented using a storage area network.
  • the system includes a client computer having data to store or desiring data to retrieve, a storage server in communication with the client computer and able to read storage-related requests from the client computer, a high-speed network running at least one data communication protocol for communicating between the client computer and the storage server, a storage device, which has data to retrieve and is used to store data, in communication with the storage server, and a storage manager for allocating and authorizing the storage device.
  • the data communication protocol includes at least one of the Internet protocols, including Internet Protocol (“IP”), Transmission Control Protocol (“TCP”), and User Datagram Protocol (“UDP").
  • the system also includes a high-speed switch for communicating between the client computer and the storage server.
  • the high-speed switch and the storage server may be integrated.
  • the storage devices include SCSI, Fibre Channel, IDE ("Integrated Drive Electronics"), and/or ramdisk, although other storage device types and protocols can easily be used.
  • the present invention also provides for mirroring, replication, and client-free server backup.
  • the present invention also provides high availability data storage.
  • the client computer is capable of operating as both a storage area network (“SAN") client and a network-attached storage (“NAS”) client.
  • When operating as a SAN client, the data communication protocol is preferably UDP running over IP; when operating as a NAS client, the data communication protocol is preferably TCP running over IP.
  • Advantageously, although the system of the present invention ultimately stores data on and retrieves data from storage devices, the system includes much flexibility between the application in the client computer requesting storage or retrieval of data and the actual storing and retrieving of the data.
  • The system of the present invention handles storage-related commands generated in the client computer and storage server, and storage-related requests and replies communicated between the client computer and storage server.
  • the client computer preferably comprises a driver which has several functions. One function is that the driver is used to enhance the reliability of UDP. Another function of the driver is to determine whether or not a storage-related command is to be communicated to the storage server.
  • In the case where the storage-related command is not communicated to the storage server, for example, if the command is a type of inquiry command, the driver responds to the command, although it appears to the client computer (or, more specifically, to the client computer's operating system) as though the response to the command is coming from a storage device connected directly to the client computer.
  • In the case where the storage-related command is communicated to the storage server, the driver converts the command to a generic storage-related request that is independent of device protocols such as SCSI, IDE, or FC.
  • the storage server determines whether that request is to be communicated to the storage device. If not, for example, if the request involves a type of status request, the storage server responds to the generic storage-related request in the form of a generic storage-related reply, although it appears to the client computer as though the response to the request is coming from a storage device connected directly to the client computer.
  • In the case where the generic storage-related request is communicated to the storage device, for example, if the request involves a read or a write, the request is first converted to a storage-related command, which is different from the one generated in the client computer, and then transmitted to the storage device.
  • the storage device does not have to be a physical device, but can appear as a virtual device.
  • This virtual device can include physical devices that also include further layers of virtualization.
  • the server's storage-related command is executed on the storage device, the storage device responds to the server's storage-related command, and the storage server preferably converts this response to a generic storage-related reply for transmission back to the client computer. Again, it appears to the client computer as though the response to the original storage-related command is coming from a storage device connected directly to the client computer.
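The round trip just described can be pictured with a short sketch. Nothing below is taken from the patent itself; the dataclass names, fields, and opcode table are assumptions chosen to illustrate how a protocol-specific SCSI command might be lifted into a protocol-independent ("generic") request on the client side:

```python
# Hypothetical sketch of a protocol-independent request and reply; all names
# and fields are illustrative assumptions, not details from the patent.
from dataclasses import dataclass
from enum import Enum, auto

class GenericOp(Enum):
    READ = auto()
    WRITE = auto()
    CAPACITY = auto()
    IDENTIFY = auto()
    READY = auto()

@dataclass
class GenericRequest:
    """Device-protocol-independent request sent from client driver to storage server."""
    device_id: str          # server name/device ID of the virtualized target
    op: GenericOp
    sector: int = 0         # starting sector (Read/Write only)
    count: int = 0          # number of sectors (Read/Write only)
    payload: bytes = b""    # data for Write

@dataclass
class GenericReply:
    device_id: str
    ok: bool
    payload: bytes = b""    # data for Read, or an encoded status

def scsi_to_generic(scsi_opcode: int, device_id: str, lba: int, blocks: int,
                    data: bytes = b"") -> GenericRequest:
    """Translate a SCSI opcode into a generic request (the client driver's role)."""
    SCSI_READ10, SCSI_WRITE10, SCSI_READ_CAPACITY = 0x28, 0x2A, 0x25
    table = {SCSI_READ10: GenericOp.READ,
             SCSI_WRITE10: GenericOp.WRITE,
             SCSI_READ_CAPACITY: GenericOp.CAPACITY}
    return GenericRequest(device_id, table[scsi_opcode], lba, blocks, data)

print(scsi_to_generic(0x28, "server1/dev-130b", lba=4096, blocks=8))
```

An IDE or FC driver would feed the same `GenericRequest` type from its own opcode table, which is what makes the server side protocol-agnostic.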
  • the system of the present invention exhibits further flexibility in that it is not limited to storing and retrieving data to or from storage devices.
  • the present invention can access a variety of peripheral devices, including, for example, scanners, tape drives, CD-ROM drives, optical disks, and printers.
  • the communication between the client computer and the peripheral device is virtualized; thus, it appears to the client computer that the peripheral device is connected directly to the client computer.
  • the peripheral device can appear as a single virtual device, as several physical devices, or several layers and combinations of both physical and virtual.
  • Because the invention uses data communication protocols, it overcomes the distance limitations imposed by SCSI and Fibre Channel.
  • users of the invention are not limited in the number of computers and storage devices that can be connected together.
  • the invention can be implemented using existing SCSI and Fibre Channel infrastructure.
  • the invention eliminates the need for local client computers to reallocate storage space when storage demands do not follow a predicted plan.
  • The system creates a software implementation using standard data communication protocols over existing infrastructure to provide flexible management of storage by virtualization.
  • FIGURE 1A is a block diagram of a conventional attached storage model.
  • FIGURE 1B is a block diagram of a more contemporary conventional storage model.
  • FIGURE 2A is a block diagram of an embodiment of the present invention.
  • FIGURE 2B is a block diagram of another embodiment of the present invention.
  • FIGURE 3 is a block diagram of yet another embodiment of the present invention.
  • FIGURE 4 is a schematic diagram of the interaction between client computers and a storage server according to an embodiment of the present invention.
  • FIGURE 5 is a diagram illustrating a preferred embodiment of the flow of client computer operations according to an embodiment of the present invention.
  • FIGURE 6 is a diagram illustrating a preferred embodiment of the flow of storage server operations according to an embodiment of the present invention.
  • FIGURE 7 is a diagram illustrating disk virtualization applications, performed according to an embodiment of the present invention.
  • FIGURE 8 is a diagram further illustrating disk virtualization according to an embodiment of the present invention.
  • FIGURE 9 is a block diagram showing several storage applications according to an embodiment of the present invention.
  • the present invention provides a virtualized approach to local storage by allowing client computers, generally attached to a LAN, MAN (metropolitan area network), or WAN (wide area network), to access many and varied storage devices in a storage area network, thereby eliminating the storage constraints imposed by a limited number of storage devices.
  • a storage server acts as the intermediary between the client computers and the storage devices, communicates with the client computers preferably using standard Internet protocols, and stores the client computers' data in the storage resources in an efficient manner.
  • The present invention uses a variety of data communication protocols, examples of which are more commonly known today as Internet protocols. Three of these protocols are IP, TCP, and UDP, each of which has specific qualities and characteristics. These data communication or Internet protocols can be used for transmission on or off the public Internet. In addition, these protocols, unlike general broadcast protocols such as IPX ("Internetwork Packet Exchange") and NetBios, can be routed to individual machines. Moreover, these protocols allow for scalability of networks, in terms of distance, speed, and number of connections. Although the Internet protocols exist today, and are used as exemplary protocols in this description, the present invention is intended to cover other data communication protocols exhibiting these same characteristics. "Internet Protocol," commonly known as "IP," is the most popular method to move data over the public Internet. IP is a connectionless protocol (i.e., it does not require that a connection be established and maintained between sender and receiver). IP addresses and routes packets through the network, and fragments and reassembles packets split up in transit. For purposes of this invention, this protocol is insufficient for moving storage traffic. In addition, IP does not guarantee delivery of the packet sequence.
  • "Transmission Control Protocol" ("TCP") runs on top of IP and provides connection-oriented, reliable delivery. The TCP header contains information regarding flow control, sequencing, and error checking.
  • TCP is not ideally suited for storage because of the significant overhead required and the negative effect on performance.
  • "User Datagram Protocol" ("UDP"), like TCP, runs on top of IP, but is connectionless and carries much less overhead. UDP does not itself guarantee delivery, which is why the present invention enhances the reliability of UDP in the client driver, as described below.
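As a rough illustration of what "enhancing the reliability of UDP" (performed by the SAN/IP driver discussed later) could look like, here is a minimal sketch using sequence numbers, acknowledgments, and retransmission; the header layout, timeout, and retry count are assumptions, not details from the patent:

```python
# A minimal sketch of reliability layered over UDP: sequence numbers,
# acknowledgments, and retransmission on timeout. Framing is an assumption.
import socket
import struct

HDR = struct.Struct("!IB")   # (sequence number, flags); flag 1 = ACK
MAX_RETRIES, TIMEOUT_S = 5, 0.5

class ReliableUDP:
    def __init__(self, local_addr, remote_addr):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(local_addr)
        self.sock.settimeout(TIMEOUT_S)
        self.remote = remote_addr
        self.seq = 0

    def send(self, payload: bytes) -> None:
        """Send one datagram, retransmitting until the peer acknowledges it."""
        packet = HDR.pack(self.seq, 0) + payload
        for _ in range(MAX_RETRIES):
            self.sock.sendto(packet, self.remote)
            try:
                data, _ = self.sock.recvfrom(HDR.size)
                ack_seq, flags = HDR.unpack(data)
                if flags == 1 and ack_seq == self.seq:
                    self.seq += 1          # delivered; advance the sequence
                    return
            except socket.timeout:
                continue                   # lost packet or lost ACK: retransmit
        raise IOError("peer did not acknowledge datagram")
```

This keeps UDP's low per-packet overhead while recovering the delivery guarantee that TCP would otherwise have to provide at greater cost.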
  • the present invention also integrates NAS functionality into SAN technology using a SAN built with a data communication protocol, such as IP, TCP, or UDP.
  • NAS and SAN operate through two different protocols.
  • NAS uses the Server Message Block ("SMB") or Network File System (“NFS”) protocol to share files and folders.
  • NAS client computers use a network redirector to access the shared files and folders.
  • the SAN preferably uses the SCSI protocol to share storage devices, although other protocols, such as IDE, may also be used.
  • SAN client computers use device drivers to access storage blocks on the devices.
  • the storage server manages the physical storage devices and communicates with the client computers using IP.
  • the storage server converts generic SAN-type storage requests into requests for the storage devices attached to the storage server.
  • the storage server packages responses from the storage devices and sends them back to the client computer that made the request.
  • the storage server can also implement a file sharing protocol, such as SMB or NFS.
  • the storage server services block storage requests for SAN-style client computers as well as file sharing for NAS-style client computers.
  • a storage server implementing NAS storage access can utilize SAN features to implement more reliable and more available NAS storage through mirroring, high availability server storage, and point-in-time snapshot.
  • one embodiment of the SAN of the present invention consists of one or more client computers 210, each able to act as both a SAN client 205 and a NAS client 215, connected to a computer network 200, such as a LAN, MAN, or WAN, a storage manager 280, one or more storage servers 240, and storage devices 250.
  • In FIGURES 1A and 1B, the SAN client computers 110 were called "servers."
  • Computers 210, although technically "servers" within the LAN, MAN, or WAN, are "clients" of storage server 240 and will be referred to as such.
  • Computer network 200 may be based on Ethernet technology, including, but not limited to, Ethernet, Fast Ethernet, and Gigabit Ethernet.
  • Client computers 210 are connected to storage server 240 via data communication protocol (e.g., IP, TCP, and/or UDP) lines 220 and high-speed network 230.
  • High-speed network 230 preferably operates at speeds of 10 Mbps or greater.
  • Storage server 240 is connected to storage devices 250 via standard storage interface 260, which can include SCSI, Fibre Channel, IDE, ESCON® ("Enterprise Systems Connection"), or any other standard storage interface.
  • Storage manager 280 is shown as being connected to storage server 240, but is generally a piece of software that manages the storage devices. The configuration shown in FIGURE 2A eliminates NAS appliance 120 of the prior art systems.
  • storage manager 280 sets up, configures, and helps troubleshoot the SAN, keeping track of the components attached to the storage server, and controlling virtualization.
  • storage manager 280 does not have to be a separate computer or device, as shown in FIGURE 2A, but can be run from one of client computers 210.
  • Storage manager 280 is preferably implemented as a JAVA-based console.
  • Storage manager 280 organizes and specifically allocates and authorizes storage resources to client computers 210 on network 200. During the authorization and allocation process, authentication credentials are established using, for example, public key cryptography. Each storage resource is uniquely identified by server name/device ID.
  • Storage manager 280 organizes and manages the storage resources as "virtualized," where the devices are mapped to real physical geometry. Authorized clients 210, when accessing allocated resources, require strict authentication with proper credentials, using standard IP.
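A minimal sketch of the kind of allocation/authorization bookkeeping described above follows. The patent specifies only that resources are identified by server name/device ID and that credentials are established (e.g., using public key cryptography); the class and field names here are hypothetical:

```python
# Illustrative sketch of storage manager allocation records; names are assumptions.
from dataclasses import dataclass

@dataclass
class Allocation:
    server_name: str
    device_id: str
    client: str
    public_key: bytes            # credential established at authorization time

class StorageManager:
    def __init__(self):
        self._allocations = {}   # (server_name, device_id) -> Allocation

    def allocate(self, server_name, device_id, client, public_key):
        """Authorize `client` to use the resource server_name/device_id."""
        key = (server_name, device_id)
        self._allocations[key] = Allocation(server_name, device_id, client, public_key)

    def is_authorized(self, server_name, device_id, client) -> bool:
        a = self._allocations.get((server_name, device_id))
        return a is not None and a.client == client

mgr = StorageManager()
mgr.allocate("storage-server-1", "dev-130b", "client-205", b"pk-bytes")
assert mgr.is_authorized("storage-server-1", "dev-130b", "client-205")
```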
  • FIGURE 2B shows a preferred embodiment of the invention which also includes a high-speed switch 270, using Gigabit Ethernet, for example, where storage manager 280 is shown as being connected to the high-speed switch.
  • High-speed switch 270, like high-speed network 230, preferably operates at speeds of 10 Mbps or greater.
  • Storage devices 250 can include SCSI devices 290 (e.g., disk drives 130, disk libraries 140, and optical storage 150) as well as FC storage devices 170 attached to FC SAN 160. Both SCSI and FC devices are connected to storage server 240 via interfaces 260.
  • Other network configurations may exist, so long as client computers 210 have a network interface supporting one of the data communication protocols (e.g., IP, TCP, or UDP), and the protocol packets that carry data on the network are routable between any node in the network.
  • FIGURE 3 illustrates an alternative embodiment to that in FIGURE 2B in which the switch 270 and the storage server 240 are integrated into a switch/server combination 300.
  • storage manager 280 is shown as a separate device in FIGURE 3, as discussed with respect to FIGURE 2A above, storage manager 280 can be implemented as software on any of clients 210.
  • Storage server 240 supports two types of data storage protocols.
  • the first type, for SAN clients, is a "Device Level Access Protocol" ("DLAP") that transmits and responds to requests for data to be read or written on a block level to storage devices 290 and 170.
  • “Device level” access means that the requests are addressed by sector or block number on the storage device.
  • the SCSI interfaces make storage requests on a block level.
  • The second type of data storage protocol, for NAS clients, is a "File Level Access Protocol" ("FLAP").
  • This protocol type includes the NAS protocols SMB (Server Message Block) and NFS (Network File System), as well as Common Internet File System ("CIFS"). These protocols permit shared access to files and folders on a file system. Both data storage protocol types are implemented using IP. The control requests and data transmitted using these data storage protocols are packaged into IP packets. Using IP as the network protocol allows the data storage control requests and data to be routed and transmitted in a wide variety of configurations, including LAN, WAN, and MAN. A client computer 210 can access storage on the storage area network using either FLAP or DLAP or both.
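To make the DLAP/FLAP distinction concrete, the sketch below contrasts the two request shapes; the message structures are illustrative assumptions, since the patent defines the protocols by behavior rather than by wire format:

```python
# DLAP addresses blocks on a device; FLAP addresses paths in a shared file
# system. Both are carried in IP packets. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class DLAPRequest:            # SAN client: device level access
    device_id: str
    op: str                   # "read" | "write"
    block: int                # addressed by sector/block number
    count: int

@dataclass
class FLAPRequest:            # NAS client: file level access (SMB/CIFS or NFS style)
    share: str
    path: str                 # addressed by file/folder path
    op: str                   # "open" | "read" | "write" | "list"
    offset: int = 0
    length: int = 0

# The same 4 KB of data, requested both ways:
as_blocks = DLAPRequest("server1/dev3", "read", block=2048, count=8)
as_file = FLAPRequest("projects", "/reports/q1.doc", "read", offset=0, length=4096)
```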
  • FIGURE 4 illustrates an exemplary operation of the invention, for either a SAN client 205 or a NAS client 215.
  • the figure displays an implementation in which SCSI storage- related commands are generated in the client computer and the storage server, but the ability to generate other types of storage-related commands will also be described.
  • Both SAN clients 205 and NAS clients 215 include application layer 410 and operating system 420.
  • SAN clients 205 also include a device driver layer which is shown in FIGURE 4 as including local device driver 430 and SCSI device driver 435a.
  • the device driver layer can also include additional device drivers for each type of storage device that the operating system contemplates using.
  • For example, local device driver 430 would be a SCSI device driver if disk 130a is a SCSI disk; if disk 130a is an IDE disk, local device driver 430 would be an IDE device driver.
  • SCSI device driver 435a is shown because it is generally preferable to present a SCSI interface to the operating system; however, it is just as easy, and may be more preferable in certain situations, to present an IDE interface to the operating system, generally in addition to the SCSI interface, but possibly instead of the SCSI interface. In such situations, the device layer would include an IDE device driver. (Analogously, other protocols such as Fibre Channel could also have an FC device driver.)
  • Local device driver 430 drives host adapter driver 465a, and SCSI device driver 435a drives SAN SCSI driver 440. If an IDE device driver were included, it would drive a "SAN IDE" driver.
  • SAN clients 205 also include SAN/IP driver 445a, UDP driver 450a, IP driver 455a, and NIC (Network Interface Card) driver 460a.
  • SAN/IP driver 445a is not dedicated to a protocol-specific driver in the layer above it.
  • SAN/IP driver 445a would also be able to accommodate the SAN IDE driver that interfaces with that IDE device driver.
  • NAS clients 215 of the invention also include network redirector 470, TCP driver 475a, IP driver 455b, and NIC (Network Interface Card) driver 460b.
  • IP drivers 455a, 455b are identical and NIC drivers 460a, 460b are identical and serve to place IP and network headers, respectively, onto network commands.
  • Storage server 240 includes provisions for communicating with both SAN and NAS clients.
  • IP driver 455c and NIC driver 460c are complementary to IP drivers 455a, 455b and NIC drivers 460a, 460b, respectively, and serve to remove the IP and network headers from network commands.
  • For SAN clients, storage server 240 also includes UDP driver 450b and SAN/IP driver 445b.
  • For NAS clients, storage server 240 also includes TCP driver 475b, SMB NFS driver 480, NAS interpreter 485, and management services layer 496. Both types of clients use I/O core virtualization layer 490 and I/O core storage services layer 492, as well as SCSI device driver 435b (if disk 130b is a SCSI device) and host adapter driver 465b.
  • SCSI device driver 435b and host adapter driver 465b are identical to SCSI device driver 435a and host adapter driver 465a, respectively, except that they operate under the operating system of the storage server, here shown as Linux 498, although other operating systems such as Windows NT™ and AIX may easily be used.
  • an IDE device driver is included in the device driver layer of the storage server, and a host adapter driver adapted to the IDE device will interface between the IDE device driver and the IDE device.
  • a conventional client computer having only local storage includes an application layer, an operating system layer, and a device driver layer.
  • the operating system determines which devices it can access via instructions it receives from the device drivers, which in turn have received instructions from the layer below.
  • application 410 desires to store data to or retrieve data from a storage device
  • the application sends a storage request, illustrated by flow line 401, to operating system 420 and directed to that specific device.
  • In order to accommodate the application's request, the operating system must make a series of storage-related requests. Some of these storage-related requests include reading or writing data from the storage device, whereas others include inquiry or status or capacity requests. Based on what the application tells the operating system, for any one of these operating system storage-related requests, the operating system is to decide from which device it should make the request.
  • FIGURE 4 shows flow line 401 splitting into flow lines 405 and 415.
  • A storage-related request will follow flow line 405 if the operating system wants to address local device 130a, as might occur in a conventional client computer. In such a situation, the operating system sends the storage-related request to local device driver 430, which extracts information from the storage-related request and translates the request into commands that can be read by host adapter driver 465a. Host adapter driver 465a reads those commands and executes those commands on device 130a, which sends a response back to operating system 420 along flow line 405. All requests made from operating system 420 are executed on disk 130a.
  • a storage-related request will follow flow line 415 if the operating system wants to address a device specified by SCSI device driver 435a.
  • SCSI device driver 435a (or an IDE device driver) is found in conventional client computers. On initialization, however, SCSI device driver 435a was instructed by SAN SCSI driver 440, which is part of the present invention, to forward all of the SCSI device driver commands to it. The operating system does not know that the device ultimately addressed by SCSI device driver 435a is not physically attached to the client computer. To the operating system, all storage-related requests that it generates for storage device access are treated the same, i.e., they are sent to a device driver and the device driver returns some response.
  • SCSI device driver 435a translates the storage-related request from the operating system to a storage-related SCSI command or commands that can be read by SAN SCSI driver 440.
  • SAN SCSI driver 440 determines whether the storage-related request is to be responded to by the storage server or can be responded to locally. If it is to be responded to locally, SAN SCSI driver 440 can generate a local response and send it back to SCSI device driver 435a.
  • SAN SCSI driver 440 provides a first level of virtualization of the present invention.
  • SAN SCSI driver 440 translates the storage-related SCSI commands into more generic storage-related requests that can be read by SAN/IP driver 445a, also forming part of the present invention. Because these generic storage-related requests are not protocol-specific, a SAN IDE driver that receives IDE commands from an IDE device driver would translate the IDE commands to these more generic storage-related requests.
  • SAN/IP driver 445a provides a second level of virtualization of the present invention.
  • SAN/IP driver 445a takes the generic storage-related request and prepares it for transmission over the high-speed network, including enhancing the reliability of UDP.
  • the generic storage-related request is sent to UDP driver 450a, then to IP driver 455a, and then to NIC driver 460a to transmit the generic storage-related request over high-speed network 230 to storage server 240.
  • In the operation of the NAS client of the present invention, as shown by NAS client flow line 425, the request to communicate with a storage device is transmitted to network redirector 470, which is generic to a NAS client.
  • Network redirector 470 is software whose purpose is to redirect tasks or input/output (I/O) to network devices such as disks and printers.
  • When network redirector 470 receives a task or I/O request, it determines whether the task or request will be performed locally or sent over the network to a remote device.
  • Network redirector 470 then sends the request to TCP driver 475a, then to IP driver 455b, and then to NIC driver 460b to transmit the request over high-speed network 230 to storage server 240.
  • both SAN and NAS generic storage-related requests are received by NIC driver 460c and sent to IP driver 455c.
  • a NAS request is then sent to TCP driver 475b, which is then received by SMB NFS driver 480, and then received by NAS interpreter 485 which sends a storage-related command to I/O core virtualization layer 490.
  • a SAN generic storage-related request is sent to UDP driver 450b and then to SAN/IP driver 445b which prepares it to be read by I/O core virtualization layer 490, the operation of which will be described below.
  • Both SAN and NAS requests are sent to I/O core storage services layer 492 whenever any interaction with the storage devices is required, from simple storage or retrieval to one of the storage services such as client-free server backup, replication, mirroring, or high availability.
  • a NAS request may also be transmitted, as shown by flow line 495, to management services layer 496 in order to configure NAS resources.
  • the management services available here are similar to those available using storage manager 280 for SAN clients. If applicable, commands destined for storage devices are sent back through I/O core virtualization layer 490 to SCSI device driver 435b, to host adapter driver 465b, and then to SCSI devices 130b, 140.
  • For IDE devices, the generic commands would be converted to IDE commands in I/O core virtualization layer 490, transmitted to an IDE driver, then to a host adapter driver specific for IDE devices, and then to the IDE devices for data storage or retrieval.
  • the storage devices send a response back to I/O core virtualization layer 490 along flow lines 415, 425, and a generic reply is generated and sent back across high-speed network 230 to the respective client computer, and ultimately to the operating system. Because of virtualization, client computers 210 do not know that a locally-attached disk did not process the request. This is another level of virtualization of the present invention.
  • SAN client 205 operates as follows.
  • Client computer 210 includes software composed of a job control daemon, located in operating system 420, and a kernel driver (represented by SAN SCSI driver 440).
  • the job control daemon initializes and sets up the control and monitoring operation of SAN SCSI driver 440, which then implements SAN/IP driver 445a.
  • Operating system 420 discerns no difference between SAN/IP driver 445a and any other standard SCSI host adapter driver 465a.
  • SAN/IP driver 445a uses standard IP service to connect to the authorized storage server 240. Allocated storage resources from storage server 240 are identified by server name/device ID, which is mapped to the local SCSI ID and logical unit number ("LUN”) under SAN/IP driver 445a.
  • This adapter/SCSI ID/LUN represents the remote storage device 130b to the local operating system as a locally-attached SCSI resource.
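A sketch of this mapping, under assumed names, might look like the following; the patent describes the mapping's existence but not its data structure:

```python
# Each allocated remote resource (server name/device ID) is presented locally
# as an adapter/SCSI ID/LUN triple. Names here are hypothetical.
local_to_remote = {}   # (adapter, scsi_id, lun) -> (server_name, device_id)

def attach(adapter: int, scsi_id: int, lun: int, server: str, device_id: str):
    """Register a remote resource under a local adapter/SCSI ID/LUN triple."""
    local_to_remote[(adapter, scsi_id, lun)] = (server, device_id)

def resolve(adapter: int, scsi_id: int, lun: int):
    """Map a locally-addressed request back to the remote server name/device ID."""
    return local_to_remote[(adapter, scsi_id, lun)]

attach(0, 2, 0, "storage-server-1", "dev-130b")
assert resolve(0, 2, 0) == ("storage-server-1", "dev-130b")
```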
  • The daemon validates a communications pathway to the server, creating an access token for all subsequent SAN client-storage server communications.
  • the storage resources now appear to operating system 420 as locally-attached storage.
  • Operating system 420 then issues storage-related requests, which are converted in SCSI device driver 435a to storage-related commands, which are then directed in SAN SCSI driver 440 to a specific SCSI ID and LUN inside SAN/IP driver 445a.
  • SAN SCSI driver 440 receives the storage-related request and then maps the SCSI ID/LUN to the server name/device ID so that the remote storage resource (e.g., 130b) can be identified.
  • a generic storage-related request package is created in the form of an open network computing remote procedure call ("ONC-RPC" or just "RPC”).
  • the RPC package contains the access token, the original generic storage-related command, and the target device ID.
  • this package is sent through a layer in SAN/IP driver 445a that enhances the reliability of UDP, before being transmitted to UDP driver 450a for a UDP header, IP driver 455a for an IP header, and NIC driver 460a for transmission over the network 230.
  • This package is sent to storage server 240, which is listening over IP.
  • a generic storage-related request from authorized client 205 arrives at storage server 240, the server uses the device ID to identify the local physical storage resources associated with the particular device ID. Because the request is virtualized, the embedded command is interpreted and re-mapped to the appropriate physical device and physical sectors. Multiple commands (e.g., SCSI, IDE) may have to be executed to service a single request. The result of the execution is packaged as a generic storage-related reply to the client. This reply, when received by client 205, is converted to a local SCSI (or IDE) completion response and returned to the operating system to complete the request.
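The following sketch illustrates the shape of this exchange. The patent specifies ONC-RPC; JSON is used here purely for readability, and the token scheme and field names are assumptions:

```python
# Sketch of the RPC package (access token + generic command + target device ID)
# and the server-side verification gate. Encoding and names are illustrative.
import json

TOKENS = {"client-205": "token-abc123"}         # issued at daemon validation time

def make_rpc_package(client: str, device_id: str, command: dict) -> bytes:
    """Client side: bundle token, device ID, and generic command for transmission."""
    return json.dumps({"token": TOKENS[client],
                       "device_id": device_id,
                       "command": command}).encode()

def server_handle(packet: bytes) -> dict:
    """Server side: verify the token, then re-map and execute (not shown)."""
    msg = json.loads(packet)
    if msg["token"] not in TOKENS.values():     # reject unauthorized requests
        return {"ok": False, "error": "not authorized"}
    # ...re-map device_id to physical devices/sectors and execute commands...
    return {"ok": True, "device_id": msg["device_id"]}

pkg = make_rpc_package("client-205", "server1/dev-130b",
                       {"op": "READ", "sector": 0, "count": 8})
print(server_handle(pkg))
```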
  • A SAN client request uses the device level access protocol (DLAP). If client computer 205 needs device level access, it will use SAN/IP driver 445a.
  • Most computer operating systems define a standard application program interface ("API") for accessing SCSI storage devices.
  • SAN/IP driver 445a implements the standard API for the particular operating system (e.g., Windows NT®, SolarisTM, Linux, etc.). This makes the DLAP data storage interface appear to the operating system as would a locally-attached SCSI storage device.
  • Operating system 420 reads and writes data to the remote storage device 130b attached to the SAN in the same way as it does to locally-attached SCSI storage device 130a.
  • SAN/IP driver 445a converts the request into IP packets.
  • SAN/IP driver 445a examines the request to determine for which storage server 240 the request was meant, and then delivers the storage-related request, in IP packet form, to storage server or servers 240 that need to service the request.
  • storage-related requests may involve multiple storage devices 130, 140, 150.
  • Storage server 240 receives the DLAP requests over the network, converts the IP packets into SCSI commands, and issues these new commands to the storage device or devices 130, 140, 150 attached to storage server 240.
  • Storage server 240 receives the data and/or status from storage device 130, 140, 150 and constructs a response for the client computer 205 that made the request.
  • the storage server converts the response into IP packets and sends the response back to SAN/IP driver 445a on the client computer 205 that made the initial request.
  • SAN/IP driver 445a on client computer 205 converts the IP packets into a response for operating system 420.
  • the response is returned to operating system 420 through the API that initiated the request. Because of the device driver that acts like a virtual SCSI device driver (i.e. SAN/IP driver 445a), operating system 420 believes that the requests are all being responded to locally.
  • A NAS client 215 uses file level access as the standard access protocol. If the NAS client 215 needs file level access, it implements a FLAP such as Server Message Block ("SMB"), Common Internet File System ("CIFS", a variant of SMB), or Network File System ("NFS"). These protocols are well defined and in common use for accessing network-attached storage in computer networks such as the one employed in the present invention.
  • Client computer 215's operating system 420 makes the FLAP storage-related requests using SMB/CIFS or NFS. These protocols are implemented with the TCP/IP protocol, and thus can be transmitted to storage server 240.
  • Storage server 240 implements one or more of the FLAPs as a server, i.e., it handles requests from clients 215 that make requests. Requests for files and/or folder information are sent back to the client 215 using the FLAP (e.g., SMB/CIFS or NFS) that initiated the request.
  • the storage server manages a file system, which is a system of managing computer files and folders for use by computer programs. The file system holds the files and folders shared through the FLAP.
  • the file system typically is stored and maintained on remote disk storage 130, 140, 150, such as those connected to storage server 240.
  • the file service stores and retrieves data for the file system using storage server 240.
  • FIGURES 5 and 6 are flowcharts that illustrate a preferred embodiment of the present invention client and storage server operations.
  • FIGURE 5 describes the client's operations with respect to how the configuration of FIGURE 4, which includes SCSI device driver 435a and SAN SCSI driver 440, would operate. (IDE requests could also be made using an IDE device driver and a SAN IDE driver.)
  • Operation begins when SAN SCSI driver 440 receives a SCSI request block ("SRB") containing the storage-related request.
  • SAN SCSI driver 440 extracts the SCSI storage-related command from the SRB.
  • SAN SCSI driver 440 retrieves from the SRB SCSI target information (SCSI ID/LUN) and transforms it into a virtual device identifier.
  • SAN SCSI driver 440 analyzes the SCSI storage-related command to determine if it is to be completed locally or remotely.
  • Examples of a locally-completed command include an "Inquiry" or "Report LUN" command, using SCSI commands as an example. If the storage-related command is to be completed locally, in step 530 SAN SCSI driver 440 produces an appropriate response to the SCSI storage-related command, without use of storage server 240. Step 595 then returns to operating system 420.
  • SAN SCSI driver 440 converts the storage-related command into a generic storage-related request, as follows.
  • Step 535 converts the storage-related command, in this case a SCSI command, to a generic storage-related command.
  • Step 540 asks whether the generic command is a "Read"- or "Write"-type (i.e., device) command. If so, step 545 retrieves the sector address and sector count from the SRB to prepare for transmission to storage server 240 via an RPC.
  • Step 550 then converts the generic storage-related command to a generic storage-related request by putting the virtual device identifier together with the generic storage-related command.
  • Generic non-Read/Write commands might include "Ready,” “Identify,” and “Capacity,” which, if ultimately executed on a SCSI device, might translate as SCSI storage- related commands to "Test Unit Ready,” “Get Device Information,” and “Get Capacity,” or, in some cases, may result in the execution of multiple SCSI commands on the SCSI device.
  • a generic storage-related request is used because the present invention is designed to support device protocols other than just SCSI (e.g., IDE, FC, ESCON).
  • a generic storage-related request derived from a device protocol other than SCSI is generated in a manner analogous to that illustrated in FIGURE 5, and the resulting request generated in step 550 would have the same format regardless of the initial protocol used.
  • SAN/IP driver 445a asks whether the client is verified and authenticated and connected to storage server 240. If not, step 560 verifies and authenticates the client and connects to storage server 240. Once the client is verified, authenticated, and connected, step 565 enhances the reliability of the UDP protocol and transmits the generic storage-related request via RPC to storage server 240.
  • After storage server 240 acts on the request as described in the flowchart of FIGURE 6, the completed results and/or responses are received by SAN/IP driver 445a from storage server 240 in step 570. The response is sent back to SAN SCSI driver 440 which, in step 575, translates or formats appropriately the received results/responses, and returns the response to operating system 420 in step 595.
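Condensing the FIGURE 5 flow into executable form, a sketch might look like this; step numbers in comments refer to the figure, and all identifiers are hypothetical:

```python
# Self-contained sketch of the client-side decision flow of FIGURE 5.
LOCAL_COMMANDS = {"INQUIRY", "REPORT_LUN"}           # completed without the server

def handle_command(opcode: str, scsi_id: int, lun: int,
                   sector: int = 0, count: int = 0) -> dict:
    device = f"vdev-{scsi_id}-{lun}"                 # SCSI ID/LUN -> virtual device ID
    if opcode in LOCAL_COMMANDS:                     # steps 525-530
        return {"source": "local", "opcode": opcode}  # virtual response, no network
    request = {"device": device, "op": opcode}       # steps 535, 550
    if opcode in ("READ", "WRITE"):                  # step 540
        request.update(sector=sector, count=count)   # step 545
    return {"source": "server", "request": request}  # step 565: sent via reliable UDP RPC

print(handle_command("INQUIRY", 2, 0))
print(handle_command("READ", 2, 0, sector=4096, count=8))
```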
  • FIGURE 6 describes the storage server's operations.
  • SAN/IP driver 445b in storage server 240 receives the generic storage-related request.
  • SAN/IP driver 445b attempts to verify and authenticate the request.
  • I/O core virtualization layer 490 takes over and asks if the verification succeeded. If so, step 625 converts the generic storage-related request to a generic storage-related command.
  • Step 630 then asks whether the generic storage-related command requires access to a device for completion. The answer to this is yes if the generic storage-related command is one such as a Read or a Write.
  • step 635 produces the appropriate results and/or responses and packages the results into a generic storage-related reply.
  • a generic storage-related reply is also generated if the verification/authentication in step 620 failed. Step 695 then transmits this generic storage-related reply back to the client that sent the request.
  • Step 640 asks whether the generic storage-related command is directed to a SCSI, IDE, or other target device. If SCSI, step 645 converts the generic storage-related command to appropriate physical SCSI command equivalents. Likewise, if IDE, step 650 converts the generic storage- related command to appropriate physical IDE command equivalents. Similarly, if the generic storage-related command is directed to a target device operating under another protocol, step 655 converts the generic storage-related command to appropriate physical command equivalents for that other protocol. Step 660 then executes the commands on the target devices, and step 665 asks whether all of the necessary commands have been executed.
  • step 670 packages the results and/or responses into a generic storage-related reply which step 695 then transmits back to the client that sent the request.
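The protocol dispatch of steps 640-655 can be sketched as a conversion table; the specific physical command names chosen here are illustrative assumptions:

```python
# Sketch of the server-side dispatch of FIGURE 6: one generic command is
# converted to the physical equivalents of the target device's protocol,
# and may expand into multiple physical commands.
def to_physical(generic_op: str, protocol: str) -> list[str]:
    scsi = {"READ": ["READ(10)"], "WRITE": ["WRITE(10)"],
            "CAPACITY": ["READ CAPACITY"], "IDENTIFY": ["INQUIRY"]}
    ide = {"READ": ["READ DMA"], "WRITE": ["WRITE DMA"],
           "CAPACITY": ["IDENTIFY DEVICE"], "IDENTIFY": ["IDENTIFY DEVICE"]}
    tables = {"scsi": scsi, "ide": ide}
    try:
        return tables[protocol][generic_op]
    except KeyError:
        raise ValueError(f"no {protocol} equivalent for {generic_op}")

# One generic command, two different physical command sets:
print(to_physical("CAPACITY", "scsi"))   # ['READ CAPACITY']
print(to_physical("CAPACITY", "ide"))    # ['IDENTIFY DEVICE']
```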
  • FIGURES 7 and 8 illustrate the virtualization capabilities of the present invention.
  • Three physical disks 710, e.g., 20 GB each, are mapped to a 60 GB virtual disk 720.
  • A single physical disk 730, e.g., 80 GB, is mapped to two 40 GB virtual disks 740.
  • FIGURE 8 illustrates several layers of disk virtualization on the client side and the server side.
  • On the client side are two logical disks 810, 820, both starting at logical sector address 0.
  • On the server side is a single virtual device 830.
  • Virtual device 830 is shown as a disk; however, virtual device 830 (as well as logical disks 810, 820) is not limited to disks, but can be another type of device, such as a scanner, a tape drive, a CD-ROM drive, or an optical drive, which is capable of being virtualized.
  • one virtual device 830 is made up of two physical disks 840, 850. Each physical disk 840, 850 includes a partition 842, 852, respectively, which defines the mapping of the respective disk.
  • physical disk 840 includes two virtual disks, 844 and 846
  • physical disk 850 includes a used area 854 and a virtual disk 856.
  • Logical disk 810 is mapped to virtual disk 844 in physical disk 840, and its logical sector address 0 is mapped starting at physical sector address N, located below partition 842.
  • Logical disk 820 is mapped to two virtual disks, 846, 856, each of which resides on different physical disks, 840, 850, respectively. Because of the presence of partition 842 and virtual disk 844, logical sector address 0 of logical disk 820 is mapped starting at physical sector address M of physical disk 840.
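Translating FIGURE 8's layout into a runnable sketch, with assumed concrete values standing in for the symbolic offsets N and M:

```python
# Logical-to-physical sector translation for a logical disk built from extents
# on one or more physical disks. Sizes and offsets are assumptions; the patent
# leaves N and M symbolic.
from dataclasses import dataclass

@dataclass
class Extent:
    physical_disk: str
    physical_start: int      # physical sector where this extent begins
    length: int              # sectors in the extent

# Logical disk 820 = virtual disk 846 (on disk 840, starting at assumed M) followed
# by virtual disk 856 (on disk 850). M = 500_000 is purely illustrative.
LOGICAL_820 = [Extent("disk840", 500_000, 200_000),
               Extent("disk850", 100_000, 300_000)]

def translate(extents, logical_sector: int):
    """Map a logical sector (logical address 0 = start of first extent)
    to a (physical disk, physical sector) pair."""
    base = 0
    for e in extents:
        if logical_sector < base + e.length:
            return e.physical_disk, e.physical_start + (logical_sector - base)
        base += e.length
    raise ValueError("sector beyond logical disk")

print(translate(LOGICAL_820, 0))         # ('disk840', 500000): logical 0 maps to M
print(translate(LOGICAL_820, 250_000))   # ('disk850', 150000): crosses to disk 850
```

A request spanning the seam between the two extents would be split into one physical command per extent, which is the single-request-to-multiple-commands behavior described for the storage server.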
  • the client operation is the same.
  • the client receives a SCSI storage-related command from the operating system (steps 510-515).
  • the SCSI storage-related command is converted to a generic storage-related request (steps 535-550) and sent over the high-speed network to storage server 240 (step 565).
  • the generic storage-related request is received by storage server 240 (step 610).
  • the generic storage-related request is transformed into a command for the server's SCSI drive, but at a sector address different from that specified in the original SCSI storage-related command in the client. That server command is then executed on that SCSI device.
  • For the dual-drive operation using logical disk 820, the storage server first determines that the SCSI storage-related command is requesting a storage-related operation that requires two drives to complete.
  • the generic storage-related request is transformed into two commands, one for each of the server's physical SCSI drives, and both at sector addresses different from that specified in the original SCSI storage-related command in the client.
  • the commands are then executed on those SCSI devices.
  • the virtualized resource also works for NAS applications. Incorporating NAS client 215 access in the virtualized storage design offers file system access to NAS clients 215 over the Gigabit Ethernet infrastructure without impacting the user LAN bandwidth. In addition, virtualization, high-speed backup snapshot, mirroring, and replication are also available to NAS clients using the existing production-user LAN.
  • The SAN in FIG. 9 is similar to the SANs depicted in FIGS. 2A and 2B.
  • Standalone storage server 240 is connected to high-speed switch 270.
  • a group of disk drives 130 is connected to standalone storage server 240 via storage interface 260.
  • FIG. 9 also includes, in addition to standalone storage server 240, high availability group 910 connected to high-speed switch 270.
  • High availability group 910 consists of two storage servers 240a, 240b connected together, each identical to standalone storage server 240. Attached to high availability group 910 via storage interface 260 are disk library 140 and several disk drives 130. Two of these disk drives 130 comprise mirrored pair 920, and the other disk drive 130 is included as part of replication group 930 with one of the disk drives 130 daisy-chained to standalone storage server 240.
  • Mirroring occurs when two drives are written to at the same time. This is illustrated in FIG. 9 using mirrored pair 920.
  • Mirroring arrow 925 indicates that each file or block that is saved to one disk drive 130 of the pair is also saved to the other disk drive 130 of the pair, thus providing redundancy of data in the event one of the disk drives fails.
  • Replication also uses redundant disk drives: the contents of one disk drive 130 are periodically copied to the other disk drive 130.
  • Replication arrow 935 indicates that at certain times, either scheduled or on demand, the contents of the disk drive 130 (in replication group 930) that is daisy-chained to standalone server 240 are copied to the disk drive 130 (in replication group 930) that is daisy-chained to high availability group 910.
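Mirroring and replication differ mainly in when the second copy is written. A toy sketch, with in-memory dictionaries standing in for disk drives 130:

```python
# Mirroring writes both members of the pair on every block write (arrow 925);
# replication copies the source drive on a schedule or on demand (arrow 935).
drive_a: dict = {}        # mirrored pair 920
drive_b: dict = {}
source: dict = {}         # replication group 930
replica: dict = {}

def mirrored_write(block: int, data: bytes) -> None:
    """Every write lands on both members of the mirrored pair."""
    drive_a[block] = data
    drive_b[block] = data

def replicate() -> None:
    """Periodically (or on demand) copy the source drive to the replica."""
    replica.clear()
    replica.update(source)

mirrored_write(0, b"payroll")
source[0] = b"ledger"
replicate()
assert drive_a == drive_b and replica == source
```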
  • Another feature of the invention is SAN-client-free backup, indicated in FIG. 9 by arrow 945.
  • This feature allows the data from one of the disk drives 130 of mirrored pair 920, for example, to be backed up to disk library 140 attached to high availability group 910 without using the resources of client computers 210.
  • Storage servers 240a, 240b control the backup, thus allowing client computers 210 to take care of other tasks for their clients.
  • the present invention maximizes and exploits the capabilities of data communication protocols such as IP, TCP, and UDP to create an efficient and effective methodology for moving storage over Ethernet.
  • the present invention accommodates a variety of storage device interfaces, including SCSI, FC, IDE, and others, and is able to connect segregated islands of existing FC SANs using IP.
  • the invention breaks the traditional boundaries of device and interface protocol and delivers the expected benefits of SAN and NAS, plus additional features such as enterprise-wide storage virtualization, replication, mirroring, snapshot, backup acceleration, high-availability servers, end-to-end diagnostics, and reporting.
  • the invention can also plug security holes by supporting encryption (e.g., using hardware virtual private network (“VPN”) boxes) and providing key-based authentication, thereby eliminating the possibilities of snooping and spoofing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A storage area network having a storage server (240), and a high-speed network (220) running a data communication protocol (IP, TCP, or UDP) for communication between a client computer (210) and the storage server (240). The high-speed network (220) includes a high-speed switch (270). The high-speed switch (270) and the storage server (240) may be integrated. A storage manager (280) allocates and authorizes storage devices (130, 140, 150, 160). The communication between the client computer (210) and the storage devices (130, 140, 150, 160) is virtualized, the storage devices (130, 140, 150, 160) appearing to the client computer (210) to be connected directly to the client computer (210).

Description

STORAGE AREA NETWORK USING A DATA COMMUNICATION PROTOCOL
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates generally to a system and method for connecting to server systems storage media such as disks and tapes. More particularly, this invention relates to connecting these devices using data communication protocols such as Internet Protocol ("IP"), Transmission Control Protocol ("TCP"), and/or User Datagram Protocol ("UDP"), which eliminate distance limitations between server systems and devices.
2. Description of the Related Art
Traditionally, computer systems connect directly to mass storage and peripheral devices, such as disk drives, tape drives, and CD-ROM drives, over a parallel bus or connection known as a "SCSI" bus. "SCSI," which stands for "Small Computer System Interface," is an ANSI standard for such connections. The SCSI standard includes several development generations, starting with SCSI-1, then SCSI-2 (Fast, Wide, and Fast-Wide types), and finally SCSI-3. The SCSI standard defines the physical characteristics of the connection, as well as the protocol for sending and receiving data over the connection. FIGURE 1A is a block diagram of a conventional attached-storage model. The model includes a local area network ("LAN") 100 to which is attached several servers 110, each schematically shown running a different operating system (such as Windows NT™, Solaris™, Linux, etc.), and a network-attached storage ("NAS") appliance 120. To each of the servers 110 is attached at least one storage device, for example, disk drives 130 and disk libraries or tape drives 140. No storage device is attached to more than one server.
The primary drawbacks of using SCSI-1 and SCSI-2 for storage are the limited number of SCSI devices (generally fewer than 8) that can be addressed by each server 110 on the LAN and the limited distance (generally less than 25 meters) allowed between the server and the storage device. These limitations make it difficult to share SCSI devices between two or more computers that are physically distant from each other.
The SCSI-3 specification solves some of these problems by increasing the number of devices that can be addressed by a server 110 and increasing the distance between the server and the storage devices. In addition, the SCSI-3 specification is supported within other technologies such as Fibre Channel, which is another network standard with high bandwidth and increased distances of up to 10 km. FIGURE 1B is a block diagram of a Fibre Channel storage area network ("SAN") connected to the servers 110 of LAN 100. Instead of attaching storage devices to specific servers 110 as in FIGURE 1A, the Fibre Channel SAN 160 allows the servers 110 to access Fibre Channel storage devices 170 or, using a Fibre Channel-SCSI bridge 180, SCSI disk drives 130 or disk libraries 140.
Fibre Channel technology has overcome many of the inherent SCSI limitations by enabling systems to send the SCSI protocol over a serial fiber optic bus that does not have the same limitations on number of connections, distances, and sharing that standard SCSI buses do. However, distance limitations still exist with Fibre Channel architecture. Another major drawback of implementing Fibre Channel is that it requires installing new technology between the SCSI devices and the computers, including controllers, adapters, routers and switches. These and other hardware solutions have been developed to overcome the primary limitations of SCSI and Fibre Channel, but are technology-intensive, complicated, and costly.
U.S. Patent No. 5,996,024 (the " '024 Patent"), issued Nov. 30, 1999, to Blumenau, overcomes several of these limitations. That reference discloses a host computer and a second computer linked via a high-speed network. The host computer includes a network SCSI device driver, and a network SCSI applications server is executing in the second computer. The high-speed network can be Megabit or Gigabit Ethernet and is able to run Internet protocol ("IP"). SCSI commands are encapsulated by the network SCSI device driver and transmitted over the network to the server, which extracts the SCSI commands and executes the SCSI commands on the SCSI devices. The reference discloses that without the use of additional hardware (e.g., by using existing Ethernet connections), the host computer is able to access a number of SCSI devices attached to the second computer as if those devices were attached to the host computer. In this manner, the reference thus appears to overcome the speed and device limitations of SCSI, while not requiring the purchase of additional hardware.
The '024 Patent is limited, however, in a number of ways. First, the '024 Patent is limited to accessing SCSI devices. Second, that system is limited to the actual SCSI devices already attached to the server or servers; there is no provision for adding SCSI devices to the system. Third, the '024 Patent discloses only a static arrangement of devices.
What is needed is a highly flexible system that, in addition to overcoming the distance and connection limitations of SCSI, is able to access devices running a variety of protocols, including SCSI, and is able to dynamically allocate these devices so as to serve the client computers.
SUMMARY OF THE INVENTION
The present invention includes a method and a system for storing and/or retrieving data, and can be implemented using a storage area network. The system includes a client computer having data to store or desiring data to retrieve, a storage server in communication with the client computer and able to read storage-related requests from the client computer, a high-speed network running at least one data communication protocol for communicating between the client computer and the storage server, a storage device, which has data to retrieve and is used to store data, in communication with the storage server, and a storage manager for allocating and authorizing the storage device. Preferably, the data communication protocol includes at least one of the Internet protocols, including Internet Protocol ("IP"), Transmission Control Protocol ("TCP"), and User Datagram Protocol ("UDP"). Preferably, the system also includes a high-speed switch for communicating between the client computer and the storage server. The high-speed switch and the storage server may be integrated. Preferably, the storage devices include SCSI, Fibre Channel, IDE ("Integrated Drive Electronics"), and/or ramdisk, although other storage device types and protocols can easily be used. Using an additional storage device, the present invention also provides for mirroring, replication, and client-free server backup. In addition, using at least an additional storage server, the present invention also provides high availability data storage. Preferably, the client computer is capable of operating as both a storage area network ("SAN") client and a network-attached storage ("NAS") client. When operating as a SAN client, the data communication protocol is preferably UDP running over IP; when operating as a NAS client, the data communication protocol is preferably TCP running over IP. Advantageously, although the system of the present invention ultimately stores data on and retrieves data from storage devices, the system includes much flexibility between the application in the client computer requesting storage or retrieval of data and the actual storing and retrieving of the data. To this end, the system of the present invention handles storage-related commands generated in the client computer and storage server and storage-related requests and replies communicated between the client computer and storage server. The client computer preferably comprises a driver which has several functions. One function is that the driver is used to enhance the reliability of UDP. Another function of the driver is to determine whether or not a storage-related command is to be communicated to the storage server. In the case where the storage-related command is not communicated to the storage server, for example, if the command is a type of inquiry command, the driver responds to the command, although it appears to the client computer (or, more specifically, to the client computer's operating system) as though the response to the command is coming from a storage device connected directly to the client computer. In the case where the storage-related command is communicated to the storage server, the driver converts the command to a generic storage-related request that is independent of device protocols such as SCSI, IDE, or FC.
When the storage server receives the generic storage-related request, the storage server determines whether that request is to be communicated to the storage device. If not, for example, if the request involves a type of status request, the storage server responds to the generic storage-related request in the form of a generic storage-related reply, although it appears to the client computer as though the response to the request is coming from a storage device connected directly to the client computer. In the case where the generic storage-related request is communicated to the storage device, for example, if the request involves a read or a write, the request is first converted to a storage-related command, which is different from the one generated in the client computer, and then transmitted to the storage device. In the first instance, the storage device does not have to be a physical device, but can appear as a virtual device. This virtual device can include physical devices that also include further layers of virtualization. The server's storage-related command is executed on the storage device, the storage device responds to the server's storage-related command, and the storage server preferably converts this response to a generic storage-related reply for transmission back to the client computer. Again, it appears to the client computer as though the response to the original storage-related command is coming from a storage device connected directly to the client computer.
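By way of a non-limiting illustration only, the device-protocol-independent request and reply described above might be represented as follows. The patent does not define a wire format, so every field name in this Python sketch is an assumption.

```python
# Illustrative sketch only: the patent does not specify the format of its
# generic storage-related requests and replies; all field names are assumed.
from dataclasses import dataclass

@dataclass
class GenericStorageRequest:
    """Device-protocol-independent request (not tied to SCSI, IDE, or FC)."""
    device_id: str        # server name/device ID of the (possibly virtual) device
    opcode: str           # e.g., "READ", "WRITE", "CAPACITY", "IDENTIFY"
    sector: int = 0       # starting sector, used by READ/WRITE-type requests
    count: int = 0        # sector count, used by READ/WRITE-type requests
    payload: bytes = b""  # data carried by a WRITE

@dataclass
class GenericStorageReply:
    """Reply returned to the client as though a local device had answered."""
    device_id: str
    status: str           # e.g., "OK" or an error indication
    payload: bytes = b""  # data returned by a READ, capacity figures, etc.
```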
The system of the present invention exhibits further flexibility in that it is not limited to storing and retrieving data to or from storage devices. The present invention can access a variety of peripheral devices, including, for example, scanners, tape drives, CD-ROM drives, optical disks, and printers. The communication between the client computer and the peripheral device is virtualized; thus, it appears to the client computer that the peripheral device is connected directly to the client computer. The peripheral device can appear as a single virtual device, as several physical devices, or as several layers and combinations of both physical and virtual devices.
There are several key advantages to this invention. First, because the invention uses data communication protocols, it overcomes the distance limitations imposed by SCSI and Fibre Channel. Second, users of the invention are not limited in the number of computers and storage devices that can be connected together. Third, the invention can be implemented using existing SCSI and Fibre Channel infrastructure. Fourth, the invention eliminates the need for local client computers to reallocate storage space when storage demands do not follow a predicted plan. The system creates a software implementation using standard data communication protocols over existing infrastructure to provide flexible management of storage by virtualization.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other features and advantages of the invention will be apparent to those skilled in the art from the following detailed description of preferred embodiments, taken together with the accompanying drawings, in which:
FIGURE 1A is a block diagram of a conventional attached storage model;
FIGURE 1B is a block diagram of a more contemporary conventional storage model;
FIGURE 2A is a block diagram of an embodiment of the present invention;
FIGURE 2B is a block diagram of another embodiment of the present invention;
FIGURE 3 is a block diagram of yet another embodiment of the present invention;
FIGURE 4 is a schematic diagram of the interaction between client computers and a storage server according to an embodiment of the present invention;
FIGURE 5 is a diagram illustrating a preferred embodiment of the flow of client computer operations according to an embodiment of the present invention;
FIGURE 6 is a diagram illustrating a preferred embodiment of the flow of storage server operations according to an embodiment of the present invention;
FIGURE 7 is a diagram illustrating disk virtualization applications performed according to an embodiment of the present invention;
FIGURE 8 is a diagram further illustrating disk virtualization according to an embodiment of the present invention; and
FIGURE 9 is a block diagram showing several storage applications according to an embodiment of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present invention provides a virtualized approach to local storage by allowing client computers, generally attached to a LAN, MAN (metropolitan area network), or WAN (wide area network), to access many and varied storage devices in a storage area network, thereby eliminating the storage constraints imposed by a limited number of storage devices. A storage server acts as the intermediary between the client computers and the storage devices, communicates with the client computers preferably using standard Internet protocols, and stores the client computers' data in the storage resources in an efficient manner.
The present invention uses a variety of data communication protocols, examples of which are more commonly known today as Internet protocols. Three of these protocols are IP, TCP, and UDP, each of which has specific qualities and characteristics. These data communication or Internet protocols can be used for transmission on or off the public Internet. In addition, these protocols, unlike general broadcast protocols such as IPX ("Internetwork Packet Exchange") and NetBios, can be routed to individual machines. Moreover, these protocols allow for scalability of networks, in terms of distance, speed, and number of connections. Although the Internet protocols exist today, and are used as exemplary protocols in this description, the present invention is intended to cover other data communication protocols exhibiting these same characteristics.

"Internet Protocol," commonly known as "IP," is the most popular method to move data over the public Internet. IP is a connectionless protocol (i.e., it does not require that a connection be maintained during a packet exchange) that resides in the Network layer (layer 3) of the OSI model. IP addresses and routes packets through the network, and fragments and reassembles packets split up in transit. IP does not, however, guarantee delivery or sequencing of packets, so, for purposes of this invention, this protocol by itself is insufficient for moving storage traffic.
"Transmission Control Protocol," commonly known as "TCP," is a connection- oriented protocol, which opens and maintains the connection between two communicating hosts on the network. When an IP packet is sent between the two hosts, a TCP header is added. This header contains information regarding flow control, sequencing, and error checking. TCP is not ideally suited for storage because of the significant overhead required and the negative effect on performance.
"User Datagram Protocol," or "UDP," a much simpler protocol than TCP, is a connectionless transport protocol used when the overhead of TCP is not needed. UDP is used to send and receive simple messages, so no session need be established.
The present invention also integrates NAS functionality into SAN technology using a SAN built with a data communication protocol, such as IP, TCP, or UDP. (For the sake of clarity, "IP" is used to denote any combination of the Internet protocols, unless otherwise noted.) NAS and SAN operate through two different protocols. NAS uses the Server Message Block ("SMB") or Network File System ("NFS") protocol to share files and folders. NAS client computers use a network redirector to access the shared files and folders. The SAN preferably uses the SCSI protocol to share storage devices, although other protocols, such as IDE, may also be used. SAN client computers use device drivers to access storage blocks on the devices.
The storage server manages the physical storage devices and communicates with the client computers using IP. The storage server converts generic SAN-type storage requests into requests for the storage devices attached to the storage server. The storage server packages responses from the storage devices and sends them back to the client computer that made the request. The storage server can also implement a file sharing protocol, such as SMB or NFS. The storage server services block storage requests for SAN-style client computers as well as file sharing for NAS-style client computers. As a result, a storage server implementing NAS storage access can utilize SAN features to implement more reliable and more available NAS storage through mirroring, high availability server storage, and point-in-time snapshot.
As illustrated in FIGURE 2A, one embodiment of the SAN of the present invention consists of one or more client computers 210, each able to act as both a SAN client 205 and a NAS client 215, connected to a computer network 200, such as a LAN, MAN, or WAN, a storage manager 280, one or more storage servers 240, and storage devices 250. (In the description of FIGURES 1A and 1B, SAN client computers 110 were called "servers." In the description of embodiments of the invention as shown in FIGURE 2A and later figures, computers 210, although technically "servers" within the LAN, MAN, or WAN, are "clients" of storage server 240 and will be referred to as such.) Computer network 200 may be based on Ethernet technology, including, but not limited to, Ethernet, Fast Ethernet, and Gigabit Ethernet. Client computers 210 are connected to storage server 240 via data communication protocol (e.g., IP, TCP, and/or UDP) lines 220 and high-speed network 230. High-speed network 230 preferably operates at speeds of 10 Mbps or greater. Storage server 240 is connected to storage devices 250 via standard storage interface 260, which can include SCSI, Fibre Channel, IDE, ESCON® ("Enterprise Systems Connection"), or any other standard storage interface. Storage manager 280 is shown as being connected to storage server 240, but is generally a piece of software that manages the storage devices. The configuration shown in FIGURE 2A eliminates NAS appliance 120 of the prior art systems.
Specifically, storage manager 280 sets up, configures, and helps troubleshoot the SAN, keeping track of the components attached to the storage server, and controlling virtualization. In an alternative embodiment, storage manager 280 does not have to be a separate computer or device, as shown in FIGURE 2A, but can be run from one of client computers 210. Storage manager 280 is preferably implemented as a JAVA-based console. Storage manager 280 organizes and specifically allocates and authorizes storage resources to client computers 210 on network 200. During the authorization and allocation process, authentication credentials are established using, for example, public key cryptography. Each storage resource is uniquely identified by server name/device ID. Storage manager 280 organizes and manages the storage resources as "virtualized," where the devices are mapped to real physical geometry. Authorized clients 210, when accessing allocated resources, require strict authentication with proper credentials, using standard IP.
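As a sketch of the allocation and authorization function just described, the following Python fragment may be helpful. The class, method names, and token scheme are assumptions; the text specifies only that resources are identified by server name/device ID and that credentials may be established with, for example, public key cryptography.

```python
# Hypothetical sketch of the storage manager's allocation/authorization table.
import secrets

class StorageManager:
    def __init__(self):
        self._allocations = {}   # (server_name, device_id) -> set of client names

    def allocate(self, server_name, device_id, client):
        """Authorize a client to use the resource server_name/device_id."""
        self._allocations.setdefault((server_name, device_id), set()).add(client)
        return secrets.token_hex(16)   # stand-in for a real credential exchange

    def is_authorized(self, server_name, device_id, client):
        return client in self._allocations.get((server_name, device_id), set())
```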
FIGURE 2B shows a preferred embodiment of the invention which also includes a high-speed switch 270, using Gigabit Ethernet, for example, where storage manager 280 is shown as being connected to the high-speed switch. High-speed switch 270, like high-speed network 230, preferably operates at speeds of 10 Mbps or greater. Storage devices 250 can include SCSI devices 290 (e.g., disk drives 130, disk libraries 140, and optical storage 150) as well as FC storage devices 170 attached to FC SAN 160. Both SCSI and FC devices are connected to storage server 240 via interfaces 260. Multiple configurations of networks may exist, so long as client computers 210 have a network interface supporting one of the data communication protocols (e.g., IP, TCP, or UDP), and the protocol packets that carry data on the network are routable between any node in the network.
FIGURE 3 illustrates an alternative embodiment to that in FIGURE 2B in which the switch 270 and the storage server 240 are integrated into a switch/server combination 300. Although storage manager 280 is shown as a separate device in FIGURE 3, as discussed with respect to FIGURE 2A above, storage manager 280 can be implemented as software on any of clients 210.
The figures show storage server 240 connected to storage devices 290 and 170, for example, via storage interfaces 260. Storage server 240 supports two types of data storage protocols. The first type, for SAN clients, is a "Device Level Access Protocol" ("DLAP") that transmits and responds to requests for data to be read or written on a block level to storage devices 290 and 170. "Device level" access means that the requests are addressed by sector or block number on the storage device. The SCSI interfaces make storage requests on a block level. The second type of data storage protocol, for NAS clients, is a "File Level Access Protocol" ("FLAP"). This protocol type includes the NAS protocols SMB (Server Message Block) and NFS (Network File System), as well as Common Internet File System ("CIFS"). These protocols permit shared access to files and folders on a file system. Both data storage protocol types are implemented using IP. The control requests and data transmitted using these data storage protocols are packaged into IP packets. Using IP as the network protocol allows the data storage control requests and data to be routed and transmitted in a wide variety of configurations, including LAN, WAN, and MAN. A client computer 210 can access storage on the storage area network using either FLAP or DLAP or both.
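The two protocol types can be contrasted with a small illustrative sketch. The dictionaries below are assumed shapes, not the patent's wire formats; they are intended only to show that DLAP addresses blocks on a device while FLAP names files and folders.

```python
# Illustrative contrast only; these dictionaries are assumptions.

# DLAP (SAN): requests address sectors or blocks on a (possibly virtual) device.
dlap_request = {
    "device_id": "server1/dev3",    # hypothetical server name/device ID
    "opcode": "READ",
    "sector": 2048,                 # device-level addressing by sector number
    "count": 8,
}

# FLAP (NAS): requests name shared files and folders in a file system.
flap_request = {
    "share": "//server1/projects",  # hypothetical SMB/NFS share
    "operation": "read",
    "path": "reports/q1.txt",       # file-level addressing by path
    "offset": 0,
    "length": 4096,
}
```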
FIGURE 4 illustrates an exemplary operation of the invention, for either a SAN client 205 or a NAS client 215. The figure displays an implementation in which SCSI storage- related commands are generated in the client computer and the storage server, but the ability to generate other types of storage-related commands will also be described. Both SAN clients 205 and NAS clients 215 include application layer 410 and operating system 420. SAN clients 205 also include a device driver layer which is shown in FIGURE 4 as including local device driver 430 and SCSI device driver 435a. The device driver layer can also include additional device drivers for each type of storage device that the operating system contemplates using. Thus, local device driver 430 would be a SCSI device driver if disk 130a is a SCSI disk. If disk 130a is an IDE disk, local device driver 430 would be an IDE device driver. Similarly, SCSI device driver 435a is shown because it is generally preferable to present a SCSI interface to the operating system; however, it is just as easy, and may be more preferable in certain situations, to present an IDE interface to the operating system, generally in addition to the SCSI interface, but possibly instead of the SCSI interface. In such situations, the device layer would include an IDE device driver. (Analogously, other protocols such as Fibre Channel could also have an FC device driver. It is intended that discussions of protocols such as IDE can include Fibre Channel or other protocols, but for simplicity, these other protocols will generally not be specifically mentioned.) Local device driver 430 drives host adapter driver 465a, whereas SCSI device driver 435a drives SAN SCSI driver 440. If an IDE device driver were included, it would drive a "SAN IDE" driver.
SAN clients 205 also include SAN/IP driver 445a, UDP driver 450a, IP driver 455a, and NIC (Network Interface Card) driver 460a. Note that SAN/IP driver 445a, as will be described below, is not dedicated to a protocol-specific driver in the layer above it. Thus, if SAN client 205 also included an IDE device driver on the device driver layer, SAN/IP driver 445a would also be able to accommodate the SAN IDE driver that interfaces with that IDE device driver.
NAS clients 215 of the invention also include network redirector 470, TCP driver 475a, IP driver 455b, and NIC (Network Interface Card) driver 460b. IP drivers 455a, 455b are identical and NIC drivers 460a, 460b are identical and serve to place IP and network headers, respectively, onto network commands.
Storage server 240 includes provisions for communicating with both SAN and NAS clients. IP driver 455c and NIC driver 460c are complementary to IP drivers 455a, 455b and NIC drivers 460a, 460b, respectively, and serve to remove the IP and network headers from network commands. For SAN clients, storage server 240 also includes UDP driver 450b and SAN/IP driver 445b. For NAS clients, storage server 240 also includes TCP driver 475b, SMB/NFS driver 480, NAS interpreter 485, and management services layer 496. Both types of clients use I/O core virtualization layer 490 and I/O core storage services layer 492, as well as SCSI device driver 435b (if disk 130b is a SCSI device) and host adapter driver 465b. The latter two modules are identical to SCSI device driver 435a and host adapter driver 465a, respectively, except that they operate under the operating system of the storage server, here shown as Linux 498, but other operating systems such as Windows NT™ and AIX may easily be used. When storage server 240 stores data on an IDE device, an IDE device driver is included in the device driver layer of the storage server, and a host adapter driver adapted to the IDE device will interface between the IDE device driver and the IDE device.
The operation of the client and the server in FIGURE 4 will now be described. A conventional client computer having only local storage includes an application layer, an operating system layer, and a device driver layer. When the client computer initializes, the operating system determines which devices it can access via instructions it receives from the device drivers, which in turn have received instructions from the layer below. When application 410 desires to store data to or retrieve data from a storage device, the application sends a storage request, illustrated by flow line 401, to operating system 420 and directed to that specific device. In order to accommodate the application's request, the operating system must make a series of storage-related requests. Some of these storage-related requests include reading or writing data from the storage device, whereas others include inquiry or status or capacity requests. Based on what the application tells the operating system, for any one of these operating system storage-related requests, the operating system decides from which device it should make the request.
FIGURE 4 shows flow line 401 splitting into flow lines 405 and 415. A storage- related request will follow flow line 405 if the operating system wants to address local device 130a, as might occur in a conventional client computer. In such a situation, the operating system sends the storage-related request to local device driver 430, which extracts information from the storage-related request and translates the request into commands that can be read by host adapter driver 465a. Host adapter driver 465a reads those commands and executes those commands on device 130a, which sends a response back to operating system 420 along flow line 405. All requests made from operating system 420 are executed on disk 130a. A storage-related request will follow flow line 415 if the operating system wants to address a device specified by SCSI device driver 435a. (Similarly, the flow will go to an IDE device driver if the operating system wants to address a device specified by the IDE device driver.) SCSI device driver 435a (or an IDE device driver) is found in conventional client computers. On initialization, however, SCSI device driver 435a was instructed by SAN SCSI driver 440, which is part of the present invention, to forward all of the SCSI device driver commands to it. The operating system does not know that the device ultimately addressed by SCSI device driver 435a is not physically attached to the client computer. To the operating system, all storage-related requests that it generates for storage device access are treated the same, i.e., they are sent to a device driver and the device driver returns some response.
SCSI device driver 435a translates the storage-related request from the operating system to a storage-related SCSI command or commands that can be read by SAN SCSI driver 440. (An IDE device driver would translate the storage-related request from the operating system to a storage-related IDE command or commands that can be read by a SAN IDE driver.) SAN SCSI driver 440 determines whether the storage-related request is to be responded to by the storage server or can be responded to locally. If it is to be responded to locally, SAN SCSI driver 440 can generate a local response and send it back to SCSI device driver 435a. Thus, unlike a conventional client computer, not every storage-related request must be executed on a storage device. In this manner, SAN SCSI driver 440 provides a first level of virtualization of the present invention.
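A minimal sketch of this first level of virtualization follows. The opcode values are standard SCSI operation codes; the dispatch function and its callback parameters are assumptions, not the patent's actual driver interface.

```python
# A minimal sketch of the first level of virtualization: some SCSI commands
# are answered in the driver itself, the rest are forwarded to the server.
INQUIRY, REPORT_LUNS = 0x12, 0xA0   # examples of commands answered locally here
READ_10, WRITE_10 = 0x28, 0x2A      # examples of commands forwarded to the server

LOCAL_OPCODES = {INQUIRY, REPORT_LUNS}

def handle_scsi_command(opcode, forward_to_server, respond_locally):
    """Answer some commands locally; forward the rest as generic requests."""
    if opcode in LOCAL_OPCODES:
        # The operating system still believes a directly-attached disk answered.
        return respond_locally(opcode)
    # Otherwise convert to a generic request and ship it to the storage server.
    return forward_to_server(opcode)
```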
If the storage server is to respond to the storage-related request, SAN SCSI driver 440 translates the storage-related SCSI commands into more generic storage-related requests that can be read by SAN/IP driver 445a, also forming part of the present invention. Because these generic storage-related requests are not protocol-specific, a SAN IDE driver that receives IDE commands from an IDE device driver would translate the IDE commands to these more generic storage-related requests that can be read by SAN/IP driver 445a. By converting device-specific commands to generic storage-related requests, SAN/IP driver 445a provides a second level of virtualization of the present invention.
SAN/IP driver 445a takes the generic storage-related request and prepares it for transmission over the high-speed network, including enhancing the reliability of UDP. The generic storage-related request is sent to UDP driver 450a, then to IP driver 455a, and then to NIC driver 460a to transmit the generic storage-related request over high-speed network 230 to storage server 240.
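The patent does not disclose how the driver enhances the reliability of UDP. One conventional technique, sketched here purely as an assumption, adds a sequence number to each datagram and retransmits until an acknowledgment arrives.

```python
# Assumed reliability enhancement over UDP: a 4-byte sequence header, an
# acknowledgment from the peer, and retransmission on timeout.
import socket
import struct

def reliable_send(sock, dest, seq, payload, retries=5, timeout=0.5):
    """Send one datagram and retransmit until the peer acks its sequence number."""
    packet = struct.pack("!I", seq) + payload
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(packet, dest)
        try:
            ack, _ = sock.recvfrom(4)   # peer is assumed to echo the 4-byte header
            if len(ack) == 4 and struct.unpack("!I", ack)[0] == seq:
                return True
        except socket.timeout:
            continue                    # no ack in time; send again
    return False
```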
In the operation of the NAS client of the present invention, as shown by NAS client flow line 425, the request to communicate with a storage device is transmitted to network redirector 470, which is generic to a NAS client. Network redirector 470 is software whose purpose is to redirect tasks or input/output (I/O) to network devices such as disks and printers. When network redirector 470 receives a task or I/O request, it determines whether the task or request will be performed locally or sent over the network to a remote device. Network redirector 470 then sends the request to TCP driver 475a, then to IP driver 455b, and then to NIC driver 460b to transmit the request over high-speed network 230 to storage server 240.
From high-speed network 230, both SAN and NAS generic storage-related requests are received by NIC driver 460c and sent to IP driver 455c. A NAS request is then sent to TCP driver 475b, which is then received by SMB/NFS driver 480, and then received by NAS interpreter 485, which sends a storage-related command to I/O core virtualization layer 490. A SAN generic storage-related request is sent to UDP driver 450b and then to SAN/IP driver 445b, which prepares it to be read by I/O core virtualization layer 490, the operation of which will be described below. Both SAN and NAS requests are sent to I/O core storage services layer 492 whenever any interaction with the storage devices is required, from simple storage or retrieval to one of the storage services such as client-free server backup, replication, mirroring, or high availability. A NAS request may also be transmitted, as shown by flow line 495, to management services layer 496 in order to configure NAS resources. The management services available here are similar to those available using storage manager 280 for SAN clients. If applicable, commands destined for storage devices are sent back through I/O core virtualization layer 490 to SCSI device driver 435b, to host adapter driver 465b, and then to SCSI devices 130b, 140. (For requests requiring access to IDE storage devices, the generic commands would be converted to IDE commands in I/O core virtualization layer 490, transmitted to an IDE driver, then to a host adapter driver specific for IDE devices, and then to the IDE devices for data storage or retrieval.) The storage devices send a response back to I/O core virtualization layer 490 along flow lines 415, 425, and a generic reply is generated and sent back across high-speed network 230 to the respective client computer, and ultimately to the operating system. Because of virtualization, client computers 210 do not know that a locally-attached disk did not process the request. This is another level of virtualization of the present invention.
More specifically, SAN client 205 operates as follows. Client computer 210 includes software composed of a job control daemon, located in operating system 420, and a kernel driver (represented by SAN SCSI driver 440). The job control daemon initializes and sets up the control and monitoring operation of SAN SCSI driver 440, which then implements SAN/IP driver 445a. Operating system 420 discerns no difference between SAN/IP driver 445a and any other standard SCSI host adapter driver 465a. SAN/IP driver 445a uses standard IP service to connect to the authorized storage server 240. Allocated storage resources from storage server 240 are identified by server name/device ID, which is mapped to the local SCSI ID and logical unit number ("LUN") under SAN/IP driver 445a. This adapter/SCSI ID/LUN represents the remote storage device 130b to the local operating system as a locally-attached SCSI resource.
During initialization, the daemon validates a communications pathway to the server, creating an access token for all subsequent SAN client-storage server communications. The storage resources now appear to operating system 420 as locally-attached storage. Operating system 420 then issues storage-related requests, which are converted in SCSI device driver 435a to storage-related commands, which are then directed in SAN SCSI driver 440 to a specific SCSI ID and LUN inside SAN/IP driver 445a. SAN SCSI driver 440 receives the storage-related request and then maps the SCSI ID/LUN to the server name/device ID so that the remote storage resource (e.g., 130b) can be identified. Once identified, a generic storage-related request package is created in the form of an open network computing remote procedure call ("ONC-RPC" or just "RPC"). The RPC package contains the access token, the original generic storage-related command, and the target device ID. Preferably, this package is sent through a layer in SAN/IP driver 445a that enhances the reliability of UDP, before being transmitted to UDP driver 450a for a UDP header, IP driver 455a for an IP header, and NIC driver 460a for transmission over the network 230. This package is sent to storage server 240, where storage server 240 is listening over IP.
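The RPC package can be pictured with the following sketch. Real ONC-RPC uses XDR encoding; the JSON stand-in, all field names, and the token and device values shown here are assumptions for illustration only.

```python
# Sketch of the RPC package described above: access token, generic command,
# and target device ID. JSON stands in for the real ONC-RPC/XDR encoding.
import json

def build_rpc_package(access_token, device_id, generic_command):
    """Bundle the three fields the text says the RPC package contains."""
    return json.dumps({
        "token": access_token,       # established when the daemon validated the pathway
        "device_id": device_id,      # server name/device ID of the remote resource
        "command": generic_command,  # the original generic storage-related command
    }).encode()

pkg = build_rpc_package("example-token", "server1/dev3",
                        {"opcode": "READ", "sector": 0, "count": 1})
```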
When a generic storage-related request from authorized client 205 arrives at storage server 240, the server uses the device ID to identify the local physical storage resources associated with the particular device ID. Because the request is virtualized, the embedded command is interpreted and re-mapped to the appropriate physical device and physical sectors. Multiple commands (e.g., SCSI, IDE) may have to be executed to service a single request. The result of the execution is packaged as a generic storage-related reply to the client. This reply, when received by client 205, is converted to a local SCSI (or IDE) completion response and returned to the operating system to complete the request.
Another way of looking at SAN client requests is as follows. A SAN client request supports device level access protocol (DLAP). If client computer 205 needs device level access, it will use SAN/IP driver 445a. Most computer operating systems define a standard application program interface ("API") for accessing SCSI storage devices. In the present invention, SAN/IP driver 445a implements the standard API for the particular operating system (e.g., Windows NT®, Solaris™, Linux, etc.). This makes the DLAP data storage interface appear to the operating system as would a locally-attached SCSI storage device. Operating system 420 reads and writes data to the remote storage device 130b attached to the SAN in the same way as it does to locally-attached SCSI storage device 130a. When operating system 420 makes a storage-related request to SAN/IP driver 445a, SAN/IP driver 445a converts the request into IP packets. SAN/IP driver 445a examines the request to determine for which storage server 240 the request was meant, and then delivers the storage-related request, in IP packet form, to storage server or servers 240 that need to service the request. In some cases, storage-related requests may involve multiple storage devices 130, 140, 150. Storage server 240 receives the DLAP requests over the network, converts the IP packets into SCSI commands, and issues these new commands to the storage device or devices 130, 140, 150 attached to storage server 240. Storage server 240 receives the data and/or status from storage device 130, 140, 150 and constructs a response for the client computer 205 that made the request. The storage server converts the response into IP packets and sends the response back to SAN/IP driver 445a on the client computer 205 that made the initial request. SAN/IP driver 445a on client computer 205 converts the IP packets into a response for operating system 420. The response is returned to operating system 420 through the API that initiated the request. Because of the device driver that acts like a virtual SCSI device driver (i.e., SAN/IP driver 445a), operating system 420 believes that the requests are all being responded to locally.
In a somewhat analogous manner, a NAS client 215 uses file level access as the standard access protocol. If the NAS client 215 needs file level access, it implements a FLAP such as Server Message Block ("SMB"), Common Internet File System ("CIFS"), a variant of SMB, or Network File System ("NFS"). These protocols are well defined and in common use for accessing network-attached storage in computer networks such as the one employed in the present invention. Client computer 215's operating system 420 makes the FLAP storage-related requests using SMB/CIFS or NFS. These protocols are implemented with the TCP/IP protocol, and thus can be transmitted to storage server 240. Storage server 240 implements one or more of the FLAPs as a server, i.e., it handles requests from clients 215 that make requests. Requests for files and/or folder information are sent back to the client 215 using the FLAP (e.g., SMB/CIFS or NFS) that initiated the request. The storage server manages a file system, which is a system of managing computer files and folders for use by computer programs. The file system holds the files and folders shared through the FLAP. The file system typically is stored and maintained on remote disk storage 130, 140, 150, such as those connected to storage server 240. The file service stores and retrieves data for the file system using storage server 240. Because the storage server uses a device-level protocol to store and maintain its file system, the SAN's features are transparent. NAS client computers 215 that use FLAP do not know that the files and folders are stored on SAN devices. This is a further level of virtualization of the present invention.

FIGURES 5 and 6 are flowcharts that illustrate a preferred embodiment of the present invention's client and storage server operations. FIGURE 5 describes how the client operates in the configuration of FIGURE 4, which includes SCSI device driver 435a and SAN SCSI driver 440. (IDE requests could also be made using an IDE device driver and a SAN IDE driver.) In step 510, a SCSI request block (SRB) is received by SAN SCSI driver 440 from SCSI device driver 435a. In step 515, SAN SCSI driver 440 extracts the SCSI storage-related command from the SRB. In step 520, SAN SCSI driver 440 retrieves the SCSI target information (SCSI ID/LUN) from the SRB and transforms it into a virtual device identifier. In step 525, SAN SCSI driver 440 analyzes the SCSI storage-related command to determine if it is to be completed locally or remotely. Locally-completed commands include commands such as "Inquiry" or "Report LUN," using SCSI commands as an example. If the storage-related command is to be completed locally, in step 530 SAN SCSI driver 440 produces an appropriate response to the SCSI storage-related command, without use of storage server 240. Step 595 then returns to operating system 420.
If the storage-related command is to be completed remotely, SAN SCSI driver 440 converts the storage-related command into a generic storage-related request, as follows. Step 535 converts the storage-related command, in this case a SCSI command, to a generic storage-related command. Step 540 asks whether the generic command is a "Read"- or "Write"-type (i.e., device) command. If so, step 545 retrieves the sector address and sector count from the SRB to prepare for transmission to storage server 240 via an RPC. If the generic storage-related command is not a Read- or Write-type command, or once the sector address and count are retrieved, step 550 converts the generic storage-related command to a generic storage-related request by putting the virtual device identifier together with the generic storage-related command. Generic non-Read/Write commands might include "Ready," "Identify," and "Capacity," which, if ultimately executed on a SCSI device, might translate as SCSI storage-related commands to "Test Unit Ready," "Get Device Information," and "Get Capacity," or, in some cases, may result in the execution of multiple SCSI commands on the SCSI device. A generic storage-related request is used because the present invention is designed to support device protocols other than just SCSI (e.g., IDE, FC, ESCON). A generic storage-related request derived from a device protocol other than SCSI is generated in a manner analogous to that illustrated in FIGURE 5, and the resulting request generated in step 550 would have the same format regardless of the initial protocol used.
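Treating this vocabulary as a lookup gives a compact illustration. The command names below come from the text; expressing the mapping as a table, including the possibility of one-to-many expansion, is an assumption.

```python
# Assumed lookup from the generic command vocabulary named in the text to the
# SCSI command names the text associates with them.
GENERIC_TO_SCSI = {
    "READY":    ["Test Unit Ready"],
    "IDENTIFY": ["Get Device Information"],
    "CAPACITY": ["Get Capacity"],
}

def scsi_commands_for(generic_opcode):
    """One generic request may translate to one or more SCSI commands."""
    return GENERIC_TO_SCSI.get(generic_opcode, [])
```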
Once a generic storage-related request is generated, it is sent to SAN/IP driver 445a. In step 555, SAN/IP driver 445a asks whether the client is verified and authenticated and connected to storage server 240. If not, step 560 verifies and authenticates the client and connects to storage server 240. Once the client is verified, authenticated, and connected, step 565 enhances the reliability of the UDP protocol and transmits the generic storage-related request via RPC to storage server 240. After storage server 240 acts on the request as described in the flowchart of FIGURE 6, the completed results and/or responses are received by SAN/IP driver 445a from storage server 240 in step 570. The response is sent back to SAN SCSI driver 440 which, in step 575, appropriately translates or formats the received results/responses, and returns the response to operating system 420 in step 595.
FIGURE 6 describes the storage server's operations. In step 610, SAN/IP driver 445b in storage server 240 receives the generic storage-related request. In step 615, SAN/IP driver 445b attempts to verify and authenticate the request. In step 620, I/O core virtualization layer 490 takes over and asks if the verification succeeded. If so, step 625 converts the generic storage-related request to a generic storage-related command. Step 630 then asks whether the generic storage-related command requires access to a device for completion. The answer to this is yes if the generic storage-related command is one such as a Read or a Write. If the generic storage-related command is one such as Ready, Identify, or Capacity, which does not need device access to complete, step 635 produces the appropriate results and/or responses and packages the results into a generic storage-related reply. A generic storage-related reply is also generated if the verification/authentication in step 620 failed. Step 695 then transmits this generic storage-related reply back to the client that sent the request.
If the answer to step 630 is yes, a Read- or Write-type command has been issued. Step 640 asks whether the generic storage-related command is directed to a SCSI, IDE, or other target device. If SCSI, step 645 converts the generic storage-related command to appropriate physical SCSI command equivalents. Likewise, if IDE, step 650 converts the generic storage-related command to appropriate physical IDE command equivalents. Similarly, if the generic storage-related command is directed to a target device operating under another protocol, step 655 converts the generic storage-related command to appropriate physical command equivalents for that other protocol. Step 660 then executes the commands on the target devices, and step 665 asks whether all of the necessary commands have been executed. If not, the flowchart returns to step 640 to again determine which kind of target device the commands are directed to (which may be different from the previously executed command). Once all the necessary commands have been executed, step 670 packages the results and/or responses into a generic storage-related reply which step 695 then transmits back to the client that sent the request.
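The server-side flow of FIGURE 6 can be condensed into the following sketch, in which the authentication, target-resolution, conversion, and execution functions are hypothetical stand-ins supplied by the caller, not functions defined by the patent.

```python
# Condensed sketch of the FIGURE 6 flow; all callable arguments are assumed.
def service_request(request, authenticate, resolve_targets, converters, execute):
    """converters maps a protocol name ("SCSI", "IDE", ...) to a function that
    turns one generic command into its physical command equivalents."""
    if not authenticate(request):
        return {"status": "AUTH_FAILED"}          # generic reply; no device access
    command = request["command"]
    if command["opcode"] in ("READY", "IDENTIFY", "CAPACITY"):
        return {"status": "OK"}                   # completed without device access
    results = []
    for device, protocol in resolve_targets(request):   # may span several devices
        for physical in converters[protocol](command, device):
            results.append(execute(device, physical))   # step 660: run each command
    return {"status": "OK", "results": results}   # step 670: packaged generic reply
```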
FIGURES 7 and 8 illustrate the virtualization capabilities of the present invention. In FIG. 7A, three physical disks 710, e.g., 20 GB each, are mapped to a 60 GB virtual disk 720. Alternatively, in FIG. 7B, a single physical disk 730, e.g., 80 GB, is mapped to two 40 GB virtual disks 740.
FIGURE 8 illustrates several layers of disk virtualization on the client side and the server side. On the client side are two logical disks 810, 820, both starting at logical sector address 0. On the server side is a single virtual device 830. In FIGURE 8, virtual device 830 is shown as a disk; however, virtual device 830 (as well as logical disks 810, 820) is not limited to disks, but can be another type of device, such as a scanner, a tape drive, a CD-ROM drive, or an optical drive, which is capable of being virtualized. In this example, one virtual device 830 is made up of two physical disks 840, 850. Each physical disk 840, 850 includes a partition 842, 852, respectively, which defines the mapping of the respective disk. In this example, physical disk 840 includes two virtual disks, 844 and 846, whereas physical disk 850 includes a used area 854 and a virtual disk 856. Logical disk 810 is mapped to virtual disk 844 in physical disk 840, and its logical sector address 0 is mapped starting at physical sector address N, located below partition 842. Logical disk 820 is mapped to two virtual disks, 846, 856, each of which resides on a different physical disk, 840, 850, respectively. Because of the presence of partition 842 and virtual disk 844, logical sector address 0 of logical disk 820 is mapped starting at physical sector address M of physical disk 840.
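The sector remapping of FIGURE 8 can be sketched as an extent table, in which each entry places a run of logical sectors at an offset on a physical disk. The extent representation and the concrete numbers standing in for sector addresses N and M are assumptions.

```python
# Extent-table sketch of the FIGURE 8 mapping; representation and numbers
# are assumed for illustration.
def map_sector(extents, logical_sector):
    """extents: list of (logical_start, length, physical_disk, physical_start)."""
    for lstart, length, disk, pstart in extents:
        if lstart <= logical_sector < lstart + length:
            return disk, pstart + (logical_sector - lstart)
    raise ValueError("sector outside the logical disk")

N, M = 63, 10_000_063   # hypothetical stand-ins for the figure's N and M

disk_810 = [(0, 10_000_000, "disk840", N)]            # one extent on one disk
disk_820 = [(0, 5_000_000, "disk840", M),             # spans two physical disks
            (5_000_000, 5_000_000, "disk850", 2_048)]

assert map_sector(disk_810, 0) == ("disk840", 63)           # logical 0 -> physical N
assert map_sector(disk_820, 0) == ("disk840", 10_000_063)   # logical 0 -> physical M
```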
For the two logical disk situations 810, 820, the client operation is the same. In both cases, the client receives a SCSI storage-related command from the operating system (steps 510-515). The SCSI storage-related command is converted to a generic storage-related request (steps 535-550) and sent over the high-speed network to storage server 240 (step 565). The generic storage-related request is received by storage server 240 (step 610). For the single-drive operation using logical disk 810, the generic storage-related request is transformed into a command for the server's SCSI drive, but at a sector address different from that specified in the original SCSI storage-related command in the client. That server command is then executed on that SCSI device. For the dual-drive operation using logical disk 820, the storage server first determines that the SCSI storage-related command is requesting a storage-related operation that requires two drives to complete. The generic storage-related request is transformed into two commands, one for each of the server's physical SCSI drives, and both at sector addresses different from that specified in the original SCSI storage-related command in the client. The commands are then executed on those SCSI devices.
The virtualized resource also works for NAS applications. Incorporating NAS client 215 access in the virtualized storage design offers file system access to NAS clients 215 over the Gigabit Ethernet infrastructure without impacting the user LAN bandwidth. In addition, virtualization, high-speed backup snapshot, mirroring, and replication are also available to NAS clients using the existing production-user LAN.
Several of these features are illustrated schematically in FIG. 9. The SAN in FIG. 9 is similar to the SANs depicted in FIGS. 2A and 2B. Standalone storage server 240 is connected to high-speed switch 270. A group of disk drives 130 is connected to standalone storage server 240 via storage interface 260. Differing from FIGS. 2A and 2B, FIG. 9 also includes, in addition to standalone storage server 240, high availability group 910 connected to high-speed switch 270. High availability group 910 consists of two storage servers 240a, 240b connected together, each identical to standalone storage server 240. Attached to high availability group 910 via storage interface 260 are disk library 140 and several disk drives 130. Two of these disk drives 130 comprise mirrored pair 920, and the other disk drive 130 is included as part of replication group 930 with one of the disk drives 130 daisy-chained to standalone storage server 240.
Mirroring occurs when two drives are written to at the same time. This is illustrated in FIG. 9 using mirrored pair 920. Mirroring arrow 925 indicates that each file or block that is saved to one disk drive 130 of the pair is also saved to the other disk drive 130 of the pair, thus providing redundancy of data in the event one of the disk drives fails.
The result of replication, redundant disk drives, is the same as that of mirroring, but is accomplished in a different way. Instead of saving each file or block to two drives at the same time, the contents of one disk drive 130 are periodically copied to the other disk drive 130. This is illustrated in FIG. 9 using replication group 930. Replication arrow 935 indicates that at certain times, either scheduled or on demand, the contents of the disk drive 130 (in replication group 930) that is daisy-chained to standalone server 240 are copied to the disk drive 130 (in replication group 930) that is daisy-chained to high availability group 910.
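The distinction can be summarized in two short routines; the drive objects and their write() and blocks() methods are hypothetical stand-ins for the storage server's device operations.

```python
# Mirroring writes every block to both drives at once; replication copies the
# source drive's contents on a schedule or on demand. Drive objects assumed.
def mirrored_write(block_no, data, primary, mirror):
    """Mirroring: each write is applied to both members of the pair."""
    primary.write(block_no, data)
    mirror.write(block_no, data)

def replicate(source, target):
    """Replication: periodically copy the source's current contents."""
    for block_no, data in source.blocks():
        target.write(block_no, data)
```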
Another feature of the invention is SAN-client-free backup, indicated in FIG. 9 by arrow 945. This feature allows the data from one of the disk drives 130 of mirrored pair 920, for example, to be backed up to disk library 140 attached to high availability group 910 without using the resources of client computers 210. Storage servers 240a, 240b control the backup, thus allowing client computers 210 to take care of other tasks for their clients.
The present invention maximizes and exploits the capabilities of data communication protocols such as IP, TCP, and UDP to create an efficient and effective methodology for moving storage over Ethernet. Using hardware-independent architecture, the present invention accommodates a variety of storage device interfaces, including SCSI, FC, IDE, and others, and is able to connect segregated islands of existing FC SANs using IP. By leveraging existing industry standards such as FC, Gigabit Ethernet, and other high-speed network infrastructures, the invention breaks the traditional boundaries of device and interface protocol and delivers the expected benefits of SAN and NAS, plus additional features such as enterprise-wide storage virtualization, replication, mirroring, snapshot, backup acceleration, high-availability servers, end-to-end diagnostics, and reporting. The invention can also plug security holes by supporting encryption (e.g., using hardware virtual private network ("VPN") boxes) and providing key-based authentication, thereby eliminating the possibilities of snooping and spoofing.
It should be understood by those skilled in the art that the present description is provided only by way of illustrative example and should in no manner be construed to limit the invention as described herein. Numerous modifications and alternate embodiments of the invention will occur to those skilled in the art. Accordingly, it is intended that the invention be limited only in terms of the following claims.

Claims

We claim:
1. A system for storing and/or retrieving data, comprising:
a client computer having data to store or desiring to retrieve data;
a storage server in communication with the client computer and capable of reading storage-related requests communicated from the client computer;
a high-speed network running at least one data communication protocol for communicating between the client computer and the storage server;
a storage device having data to retrieve and for storing data, wherein the storage device is in communication with the storage server; and
a storage manager in communication with the storage server for allocating and authorizing the storage device.
2. The system according to claim 1, wherein the data communication protocol is selected from the group consisting of Internet Protocol ("IP"), Transmission Control Protocol ("TCP"), and User Datagram Protocol ("UDP").
3. The system according to claim 1, further comprising a high-speed switch for communicating between the client computer and the storage server.
4. The system according to claim 3, wherein the high-speed switch and the storage server are integrated.
5. The system according to claim 1, wherein the storage device is selected from the group consisting of SCSI, Fibre Channel, IDE, and ramdisk devices.
6. The system according to claim 1, further comprising at least one other storage device, wherein storing the data comprises mirroring the data of one of the storage devices onto the other storage device.
7. The system according to claim 1, further comprising at least one other storage device, wherein storing the data comprises replicating the data of one of the storage devices onto the other storage device.
8. The system according to claim 1, further comprising at least one other storage server for providing high availability data storage.
9. The system according to claim 1, further comprising at least a second storage device, wherein the data on the storage device is backed up to the second storage device without involving the use of the client computer.
10. The system according to claim 1, wherein the client computer is capable of operating as both a storage area network ("SAN") client and a network-attached storage ("NAS") client.
11. The system according to claim 1, wherein the client computer further comprises a driver for determining whether a storage-related command is to be communicated to the storage server.
12. The system according to claim 11, wherein the data communication protocol is User Datagram Protocol ("UDP") running over Internet Protocol ("IP").
13. The system according to claim 12, wherein the driver enhances the reliability of UDP.
14. The system according to claim 11, wherein the storage-related command is not communicated to the storage server, but instead is responded to by the driver, and wherein it appears to the client computer that the response to the storage-related command is coming from a storage device connected directly to the client computer.
15. The system according to claim 11, wherein the storage-related command is communicated to the storage server in the form of a generic storage-related request.
16. The system according to claim 15, wherein the generic storage-related request is device protocol-independent.
17. The system according to claim 15, wherein the generic storage-related request is not communicated to the storage device, but instead is responded to by the storage server in the form of a generic storage-related reply, and wherein it appears to the client computer that the response to the storage-related command is coming from a storage device connected directly to the client computer.
18. The system according to claim 15, wherein the storage device is virtualized.
19. The system according to claim 18, wherein the generic storage-related request is communicated to the storage device.
20. The system according to claim 19, wherein the generic storage-related request is converted to a second storage-related command capable of being executed on the storage device, and the storage device responds to the second storage-related command and communicates the response to the storage server.
21. The system according to claim 20, wherein the storage server converts the response to the second storage-related command into a generic storage-related reply and communicates the generic storage-related reply to the client computer, and wherein it appears to the client computer that the storage device is connected directly to the client computer.
22. The system according to claim 18, wherein the storage device appears as a single virtual device.
23. The system according to claim 22, wherein the single virtual device comprises at least one physical device.
24. The system according to claim 23, wherein the physical device comprises at least one virtual device.
25. A method for storing and/or retrieving data, comprising:
communicating over a high-speed network at least one storage-related request from a client computer to a storage server, the network running at least one data communication protocol;
allocating and authorizing a storage device in communication with the storage server; and
storing data on and/or retrieving data from the storage device.
26. The method according to claim 25, wherein the data communication protocol is selected from the group consisting of Internet Protocol ("IP"), Transmission Control Protocol ("TCP"), and User Datagram Protocol ("UDP").
27. The method according to claim 25, wherein the storage device is selected from the group consisting of SCSI, Fibre Channel, IDE, and ramdisk devices.
28. The method according to claim 25, wherein the client computer is capable of operating as both a storage area network ("SAN") client and a network-attached storage ("NAS") client.
29. The method according to claim 25, wherein the data communication protocol is User Datagram Protocol ("UDP") running over Internet Protocol ("IP").
30. The method according to claim 29, further comprising enhancing the reliability of UDP.
31. The method according to claim 25, further comprising virtualizing the communication between the client computer and the storage device.
32. The method according to claim 31, wherein the virtualizing comprises not communicating to the storage server a storage-related command, but instead responding to the storage-related command so that it appears to the client computer that the response to the storage-related command is coming from a storage device connected directly to the client computer.
33. The method according to claim 31, wherein the virtualizing comprises converting the storage-related command to a generic storage-related request, wherein the generic storage-related request is device protocol-independent.
34. The method according to claim 33, further comprising not communicating the generic storage-related request to the storage device, but instead responding to the generic storage-related request so that it appears to the client computer that the response to the storage-related command is coming from a storage device connected directly to the client computer.
35. The method according to claim 34, further comprising converting the generic storage-related request to a second storage-related command capable of being executed on the storage device, the storage device responding to the second storage-related command and communicating the response to the storage server.
36. The method according to claim 35, further comprising converting the second storage-related command into a generic storage-related reply and communicating the generic storage-related reply to the client computer so that it appears to the client computer that the storage device is connected directly to the client computer.
37. The method according to claim 31, wherein the virtualizing comprises making the storage device appear as a single virtual device.
38. The method according to claim 37, wherein the single virtual device comprises at least one physical device.
39. The method according to claim 38, wherein the physical device comprises at least one virtual device.
40. A system for accessing a peripheral device, comprising:
a client computer;
a server in communication with the client computer and capable of reading peripheral device-related requests communicated from the client computer;
a high-speed network running at least one data communication protocol for communicating between the client computer and the server; and
a manager in communication with the server for allocating and authorizing the peripheral device, wherein the peripheral device is remote from the client computer and in communication with the server.
41. The system according to claim 40, wherein the peripheral device is selected from the group consisting of SCSI, Fibre Channel, and IDE devices.
42. The system according to claim 40, wherein the communication between the client computer and the peripheral device is virtualized.
43. The system according to claim 42, wherein it appears to the client computer that the peripheral device is connected directly to the client computer.
44. The system according to claim 42, wherein the peripheral device appears as a single virtual device.
45. The system according to claim 44, wherein the single virtual device comprises at least one physical device.
46. The system according to claim 45, wherein the physical device comprises at least one virtual device.
PCT/US2002/008001 2001-02-23 2002-02-21 Storage area network using a data communication protocol WO2002069159A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/792,873 2001-02-23
US09/792,873 US20040233910A1 (en) 2001-02-23 2001-02-23 Storage area network using a data communication protocol

Publications (1)

Publication Number Publication Date
WO2002069159A1

Family

ID=25158331

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2002/008001 WO2002069159A1 (en) 2001-02-23 2002-02-21 Storage area network using a data communication protocol

Country Status (2)

Country Link
US (1) US20040233910A1 (en)
WO (1) WO2002069159A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004095287A2 (en) * 2003-04-04 2004-11-04 Bluearc Uk Limited Network-attached storage system, device, and method supporting multiple storage device types
WO2005066830A1 (en) * 2004-01-08 2005-07-21 Agency For Science, Technology & Research A shared storage network system and a method for operating a shared storage network system
WO2005112397A1 (en) * 2004-04-30 2005-11-24 Network Appliance, Inc. System and method for configuring a storage network utilizing a multi-protocol storage appliance
WO2006012342A2 (en) * 2004-06-30 2006-02-02 Intel Corporation Multi-protocol bridge
US7330950B2 (en) 2003-03-27 2008-02-12 Hitachi, Ltd. Storage device
US7430629B2 (en) 2005-05-12 2008-09-30 International Business Machines Corporation Internet SCSI communication via UNDI services
US7624134B2 (en) 2006-06-12 2009-11-24 International Business Machines Corporation Enabling access to remote storage for use with a backup program
US7631002B2 (en) 2002-05-23 2009-12-08 Hitachi, Ltd. Storage device management method, system and program
US7987154B2 (en) 2004-08-12 2011-07-26 Telecom Italia S.P.A. System, a method and a device for updating a data set through a communication network
US9542310B2 (en) 2002-11-01 2017-01-10 Hitachi Data Systems Engineering UK Limited File server node with non-volatile memory processing module coupled to cluster file server node
US20220067693A1 (en) * 2020-08-26 2022-03-03 Mastercard International Incorporated Systems and methods for distributing data

Families Citing this family (95)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7203732B2 (en) * 1999-11-11 2007-04-10 Miralink Corporation Flexible remote data mirroring
TW454120B (en) 1999-11-11 2001-09-11 Miralink Corp Flexible remote data mirroring
US6868417B2 (en) * 2000-12-18 2005-03-15 Spinnaker Networks, Inc. Mechanism for handling file level and block level remote file accesses using the same server
US7239642B1 (en) * 2001-07-16 2007-07-03 Network Appliance, Inc. Multi-protocol network interface card
US7185062B2 (en) * 2001-09-28 2007-02-27 Emc Corporation Switch-based storage services
US20070094466A1 (en) * 2001-12-26 2007-04-26 Cisco Technology, Inc., A Corporation Of California Techniques for improving mirroring operations implemented in storage area networks and network based virtualization
US9009427B2 (en) 2001-12-26 2015-04-14 Cisco Technology, Inc. Mirroring mechanisms for storage area networks and network based virtualization
US20090259817A1 (en) * 2001-12-26 2009-10-15 Cisco Technology, Inc. Mirror Consistency Checking Techniques For Storage Area Networks And Network Based Virtualization
US20070094465A1 (en) * 2001-12-26 2007-04-26 Cisco Technology, Inc., A Corporation Of California Mirroring mechanisms for storage area networks and network based virtualization
US7149808B2 (en) * 2002-01-14 2006-12-12 Array Networks, Inc. Application protocol offloading
JP3964212B2 (en) * 2002-01-16 2007-08-22 Hitachi, Ltd. Storage system
US20040025166A1 (en) * 2002-02-02 2004-02-05 International Business Machines Corporation Server computer and a method for accessing resources from virtual machines of a server computer via a fibre channel
US20030200247A1 (en) * 2002-02-02 2003-10-23 International Business Machines Corporation Server computer and a method for accessing resources from virtual machines of a server computer via a fibre channel
JP2003241903A (en) * 2002-02-14 2003-08-29 Hitachi Ltd Storage control device, storage system and control method thereof
JP4146653B2 (en) * 2002-02-28 2008-09-10 Hitachi, Ltd. Storage device
US7421478B1 (en) 2002-03-07 2008-09-02 Cisco Technology, Inc. Method and apparatus for exchanging heartbeat messages and configuration information between nodes operating in a master-slave configuration
US7281062B1 (en) 2002-04-22 2007-10-09 Cisco Technology, Inc. Virtual SCSI bus for SCSI-based storage area network
US7165258B1 (en) * 2002-04-22 2007-01-16 Cisco Technology, Inc. SCSI-based storage area network having a SCSI router that routes traffic between SCSI and IP networks
US7415535B1 (en) 2002-04-22 2008-08-19 Cisco Technology, Inc. Virtual MAC address system and method
US7240098B1 (en) 2002-05-09 2007-07-03 Cisco Technology, Inc. System, method, and software for a virtual host bus adapter in a storage-area network
JP2003330762A (en) * 2002-05-09 2003-11-21 Hitachi Ltd Control method for storage system, storage system, switch and program
US7120837B1 (en) 2002-05-09 2006-10-10 Cisco Technology, Inc. System and method for delayed error handling
US7509436B1 (en) 2002-05-09 2009-03-24 Cisco Technology, Inc. System and method for increased virtual driver throughput
US7493404B2 (en) * 2002-05-30 2009-02-17 Lsi Corporation Apparatus and method for providing transparent sharing of channel resources by multiple host machines utilizing mixed mode block and file protocols
JP4382328B2 (en) * 2002-06-11 2009-12-09 Hitachi, Ltd. Secure storage system
US7873700B2 (en) * 2002-08-09 2011-01-18 Netapp, Inc. Multi-protocol storage appliance that provides integrated support for file and block access protocols
US7711539B1 (en) * 2002-08-12 2010-05-04 Netapp, Inc. System and method for emulating SCSI reservations using network file access protocols
US7181578B1 (en) * 2002-09-12 2007-02-20 Copan Systems, Inc. Method and apparatus for efficient scalable storage management
US7475124B2 (en) * 2002-09-25 2009-01-06 Emc Corporation Network block services for client access of network-attached data storage in an IP network
US20040093607A1 (en) * 2002-10-29 2004-05-13 Elliott Stephen J System providing operating system independent access to data storage devices
KR100680626B1 (en) * 2002-12-20 2007-02-09 International Business Machines Corporation Secure system and method for SAN management in a non-trusted server environment
JP2004227097A (en) * 2003-01-20 2004-08-12 Hitachi Ltd Control method of storage device controller, and storage device controller
US7804852B1 (en) 2003-01-24 2010-09-28 Douglas Durham Systems and methods for definition and use of a common time base in multi-protocol environments
US7844690B1 (en) 2003-01-24 2010-11-30 Douglas Durham Systems and methods for creation and use of a virtual protocol analyzer
US7831736B1 (en) 2003-02-27 2010-11-09 Cisco Technology, Inc. System and method for supporting VLANs in an iSCSI
US7290168B1 (en) * 2003-02-28 2007-10-30 Sun Microsystems, Inc. Systems and methods for providing a multi-path network switch system
US7447939B1 (en) 2003-02-28 2008-11-04 Sun Microsystems, Inc. Systems and methods for performing quiescence in a storage virtualization environment
US7236987B1 (en) 2003-02-28 2007-06-26 Sun Microsystems Inc. Systems and methods for providing a storage virtualization environment
US7383381B1 (en) 2003-02-28 2008-06-03 Sun Microsystems, Inc. Systems and methods for configuring a storage virtualization environment
US7904599B1 (en) 2003-03-28 2011-03-08 Cisco Technology, Inc. Synchronization and auditing of zone configuration data in storage-area networks
US7383378B1 (en) * 2003-04-11 2008-06-03 Network Appliance, Inc. System and method for supporting file and block access to storage object on a storage appliance
JPWO2004095293A1 (en) * 2003-04-24 2006-07-13 Mitsubishi Denki Kabushiki Kaisha Video information system and module unit
US20040221123A1 (en) * 2003-05-02 2004-11-04 Lam Wai Tung Virtual data switch and method of use
US7610348B2 (en) 2003-05-07 2009-10-27 International Business Machines Distributed file serving architecture system with metadata storage virtualization and data access at the data server connection speed
US7680957B1 (en) * 2003-05-09 2010-03-16 Symantec Operating Corporation Computer system configuration representation and transfer
JP4278445B2 (en) * 2003-06-18 2009-06-17 Hitachi, Ltd. Network system and switch
US7165145B2 (en) * 2003-07-02 2007-01-16 Falconstor Software, Inc. System and method to protect data stored in a storage system
US20070168046A1 (en) * 2003-08-04 2007-07-19 Mitsubishi Denki Kabushiki Kaisha Image information apparatus and module unit
DE10345016A1 (en) * 2003-09-23 2005-04-21 Deutsche Telekom AG Method and communication system for managing and providing data
US20050071560A1 (en) * 2003-09-30 2005-03-31 International Business Machines Corp. Autonomic block-level hierarchical storage management for storage networks
US7693960B1 (en) * 2003-10-22 2010-04-06 Sprint Communications Company L.P. Asynchronous data storage system with geographic diversity
US20050198401A1 (en) * 2004-01-29 2005-09-08 Chron Edward G. Efficiently virtualizing multiple network attached stores
US8230085B2 (en) * 2004-04-12 2012-07-24 Netapp, Inc. System and method for supporting block-based protocols on a virtual storage appliance executing within a physical storage appliance
US8019842B1 (en) * 2005-01-27 2011-09-13 Netapp, Inc. System and method for distributing enclosure services data to coordinate shared storage
US8180855B2 (en) 2005-01-27 2012-05-15 Netapp, Inc. Coordinated shared storage architecture
US7535917B1 (en) * 2005-02-22 2009-05-19 Netapp, Inc. Multi-protocol network adapter
US7568056B2 (en) * 2005-03-28 2009-07-28 Nvidia Corporation Host bus adapter that interfaces with host computer bus to multiple types of storage devices
US7761649B2 (en) * 2005-06-02 2010-07-20 Seagate Technology Llc Storage system with synchronized processing elements
US20060277268A1 (en) * 2005-06-02 2006-12-07 Everhart Craig F Access method for file systems
US7529816B2 (en) * 2005-06-03 2009-05-05 Hewlett-Packard Development Company, L.P. System for providing multi-path input/output in a clustered data storage network
US20080052525A1 (en) * 2006-08-28 2008-02-28 Tableau, Llc Password recovery
US20080126472A1 (en) * 2006-08-28 2008-05-29 Tableau, Llc Computer communication
US20080052429A1 (en) * 2006-08-28 2008-02-28 Tableau, Llc Off-board computational resources
US20080052490A1 (en) * 2006-08-28 2008-02-28 Tableau, Llc Computational resource array
US20080140724A1 (en) 2006-12-06 2008-06-12 David Flynn Apparatus, system, and method for servicing object requests within a storage controller
DE602008001128D1 (en) * 2007-05-10 2010-06-17 Research In Motion Ltd System and method for managing devices
US8046509B2 (en) * 2007-07-06 2011-10-25 Prostor Systems, Inc. Commonality factoring for removable media
US8650615B2 (en) * 2007-09-28 2014-02-11 Emc Corporation Cross domain delegation by a storage virtualization system
US7836226B2 (en) 2007-12-06 2010-11-16 Fusion-Io, Inc. Apparatus, system, and method for coordinating storage requests in a multi-processor/multi-thread environment
JP4547440B2 (en) * 2008-03-31 2010-09-22 Fujitsu Limited Virtual tape system
US7882202B2 (en) * 2008-04-01 2011-02-01 International Business Machines Corporation System to delegate virtual storage access method related file operations to a storage server using an in-band RPC mechanism
US20100199196A1 (en) * 2008-09-04 2010-08-05 Thompson Aerospace Method for delivering graphic intensive web type content to thin clients
US20100131661A1 (en) * 2008-11-26 2010-05-27 Inventec Corporation Fiber channel storage server
US9953178B2 (en) * 2010-02-03 2018-04-24 Os Nexus, Inc. Role based access control utilizing scoped permissions
JP2014522011A (en) * 2011-05-17 2014-08-28 Databoard Incorporated Providing access to mainframe data objects in a heterogeneous computing environment
US9063938B2 (en) 2012-03-30 2015-06-23 Commvault Systems, Inc. Search filtered file system using secondary storage, including multi-dimensional indexing and searching of archived files
US9639297B2 (en) 2012-03-30 2017-05-02 Commvault Systems, Inc. Shared network-available storage that permits concurrent data access
US10552385B2 (en) 2012-05-20 2020-02-04 Microsoft Technology Licensing, Llc System and methods for implementing a server-based hierarchical mass storage system
US9208168B2 (en) 2012-11-19 2015-12-08 Netapp, Inc. Inter-protocol copy offload
JP6241178B2 (en) * 2013-09-27 2017-12-06 Fujitsu Limited Storage control device, storage control method, and storage control program
US10635316B2 (en) 2014-03-08 2020-04-28 Diamanti, Inc. Methods and systems for data storage using solid state drives
US10628353B2 (en) 2014-03-08 2020-04-21 Diamanti, Inc. Enabling use of non-volatile media-express (NVMe) over a network
US11921658B2 (en) 2014-03-08 2024-03-05 Diamanti, Inc. Enabling use of non-volatile media-express (NVMe) over a network
KR20160138448A (en) * 2014-03-08 2016-12-05 Diamanti, Inc. Methods and systems for converged networking and storage
US10152266B1 (en) * 2014-07-28 2018-12-11 Veritas Technologies Llc Systems and methods for providing data backup services in a virtual environment
US9819732B2 (en) * 2015-07-31 2017-11-14 Netapp, Inc. Methods for centralized management API service across disparate storage platforms and devices thereof
CN107534645B (en) 2015-08-12 2020-11-20 Hewlett Packard Enterprise Development LP Storage system, non-transitory machine readable medium, and method for host storage authentication
US10075524B1 (en) * 2015-09-29 2018-09-11 Amazon Technologies, Inc. Storage bridge device for communicating with network storage
US9854060B2 (en) 2015-09-29 2017-12-26 Netapp, Inc. Methods and systems for monitoring network storage system resources by an API server
US10162793B1 (en) 2015-09-29 2018-12-25 Amazon Technologies, Inc. Storage adapter device for communicating with network storage
US9798891B2 (en) 2015-10-13 2017-10-24 Netapp, Inc. Methods and systems for service level objective API for storage management
US10831710B2 (en) * 2016-08-03 2020-11-10 Dell Products L.P. Method and system for implementing namespace aggregation by single redirection of folders for NFS and SMB protocols
CN106776430A (en) * 2016-12-12 2017-05-31 Inventec Technology Co., Ltd. Server system
US10620835B2 (en) * 2017-01-27 2020-04-14 Wyse Technology L.L.C. Attaching a windows file system to a remote non-windows disk stack
US11811674B2 (en) * 2018-10-20 2023-11-07 Netapp, Inc. Lock reservations for shared storage

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5566331A (en) * 1994-01-24 1996-10-15 University Corporation For Atmospheric Research Mass storage system for file-systems

Family Cites Families (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5287537A (en) * 1985-11-15 1994-02-15 Data General Corporation Distributed processing system having plural computers each using identical retaining information to identify another computer for executing a received command
US5109515A (en) * 1987-09-28 1992-04-28 At&T Bell Laboratories User and application program transparent resource sharing multiple computer interface architecture with kernel process level transfer of user requested services
US5206946A (en) * 1989-10-27 1993-04-27 Sand Technology Systems Development, Inc. Apparatus using converters, multiplexer and two latches to convert SCSI data into serial data and vice versa
US5388243A (en) * 1990-03-09 1995-02-07 Mti Technology Corporation Multi-sort mass storage device announcing its active paths without deactivating its ports in a network architecture
US5151987A (en) * 1990-10-23 1992-09-29 International Business Machines Corporation Recovery objects in an object oriented computing environment
US5274783A (en) * 1991-06-28 1993-12-28 Digital Equipment Corporation SCSI interface employing bus extender and auxiliary bus
US5237695A (en) * 1991-11-01 1993-08-17 Hewlett-Packard Company Bus contention resolution method for network devices on a computer network having network segments connected by an interconnection medium over an extended distance
US5333277A (en) * 1992-01-10 1994-07-26 Exportech Trading Company Data buss interface and expansion system
FR2695740B1 (en) * 1992-09-16 1994-11-25 Bull Sa Data transmission system between a computer bus and a very high speed ring-shaped network.
US5491812A (en) * 1992-09-28 1996-02-13 Conner Peripherals, Inc. System and method for ethernet to SCSI conversion
JPH06195322A (en) * 1992-10-29 1994-07-15 Hitachi Ltd Information processor used as general purpose neurocomputer
US5613160A (en) * 1992-11-18 1997-03-18 Canon Kabushiki Kaisha In an interactive network board, method and apparatus for placing a network peripheral in a default configuration
JP2793489B2 (en) * 1993-01-13 1998-09-03 International Business Machines Corporation Common data link interface
US5325527A (en) * 1993-01-19 1994-06-28 Canon Information Systems, Inc. Client/server communication system utilizing a self-generating nodal network
US5528765A (en) * 1993-03-15 1996-06-18 R. C. Baker & Associates Ltd. SCSI bus extension system for controlling individual arbitration on interlinked SCSI bus segments
US5574862A (en) * 1993-04-14 1996-11-12 Radius Inc. Multiprocessing system with distributed input/output management
US5463772A (en) * 1993-04-23 1995-10-31 Hewlett-Packard Company Transparent peripheral file systems with on-board compression, decompression, and space management
JP3264465B2 (en) * 1993-06-30 2002-03-11 Hitachi, Ltd. Storage system
US5548783A (en) * 1993-10-28 1996-08-20 Dell Usa, L.P. Composite drive controller including composite disk driver for supporting composite drive accesses and a pass-through driver for supporting accesses to stand-alone SCSI peripherals
US5574861A (en) * 1993-12-21 1996-11-12 Lorvig; Don Dynamic allocation of B-channels in ISDN
US5471634A (en) * 1994-03-29 1995-11-28 The United States Of America As Represented By The Secretary Of The Navy Network file server with automatic sensing means
US5596723A (en) * 1994-06-23 1997-01-21 Dell Usa, Lp Method and apparatus for automatically detecting the available network services in a network system
US5504757A (en) * 1994-09-27 1996-04-02 International Business Machines Corporation Method for selecting transmission speeds for transmitting data packets over a serial bus
EP0713183A3 (en) * 1994-11-18 1996-10-02 Microsoft Corp Network independent file shadowing
US5640541A (en) * 1995-03-24 1997-06-17 Openconnect Systems, Inc. Adapter for interfacing a SCSI bus with an IBM system/360/370 I/O interface channel and information system including same
US5664221A (en) * 1995-11-14 1997-09-02 Digital Equipment Corporation System for reconfiguring addresses of SCSI devices via a device address bus independent of the SCSI bus
US5787019A (en) * 1996-05-10 1998-07-28 Apple Computer, Inc. System and method for handling dynamic changes in device states
US5923850A (en) * 1996-06-28 1999-07-13 Sun Microsystems, Inc. Historical asset information data storage schema
US5892955A (en) * 1996-09-20 1999-04-06 Emc Corporation Control of a multi-user disk storage system
US6178173B1 (en) * 1996-12-30 2001-01-23 Paradyne Corporation System and method for communicating pre-connect information in a digital communication system
US5925119A (en) * 1997-03-28 1999-07-20 Quantum Corporation Computer architecture for automated storage library
US6003065A (en) * 1997-04-24 1999-12-14 Sun Microsystems, Inc. Method and system for distributed processing of applications on host and peripheral devices
US5991813A (en) * 1997-05-16 1999-11-23 Icon Cmt Corp. Network enabled SCSI interface
US5941972A (en) * 1997-12-31 1999-08-24 Crossroads Systems, Inc. Storage router and method for providing virtual local storage
US5996024A (en) * 1998-01-14 1999-11-30 Emc Corporation Method and apparatus for a SCSI applications server which extracts SCSI commands and data from message and encapsulates SCSI responses to provide transparent operation
US6041381A (en) * 1998-02-05 2000-03-21 Crossroads Systems, Inc. Fibre channel to SCSI addressing method and system
US6263445B1 (en) * 1998-06-30 2001-07-17 Emc Corporation Method and apparatus for authenticating connections to a storage system coupled to a network
US6400730B1 (en) * 1999-03-10 2002-06-04 Nishan Systems, Inc. Method and apparatus for transferring data between IP network devices and SCSI and fibre channel devices over an IP network
US6188997B1 (en) * 1999-04-19 2001-02-13 Pitney Bowes Inc. Postage metering system having currency synchronization
US6826613B1 (en) * 2000-03-15 2004-11-30 3Com Corporation Virtually addressing storage devices through a switch
US6898670B2 (en) * 2000-04-18 2005-05-24 Storeage Networking Technologies Storage virtualization in a storage area network
US6985956B2 (en) * 2000-11-02 2006-01-10 Sun Microsystems, Inc. Switching system
US6606690B2 (en) * 2001-02-20 2003-08-12 Hewlett-Packard Development Company, L.P. System and method for accessing a storage area network as network attached storage

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5566331A (en) * 1994-01-24 1996-10-15 University Corporation For Atmospheric Research Mass storage system for file-systems

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7631002B2 (en) 2002-05-23 2009-12-08 Hitachi, Ltd. Storage device management method, system and program
US9753848B2 (en) 2002-11-01 2017-09-05 Hitachi Data Systems Engineering UK Limited Apparatus for managing a plurality of root nodes for file systems
US9542310B2 (en) 2002-11-01 2017-01-10 Hitachi Data Systems Engineering UK Limited File server node with non-volatile memory processing module coupled to cluster file server node
US7330950B2 (en) 2003-03-27 2008-02-12 Hitachi, Ltd. Storage device
US8230194B2 (en) 2003-03-27 2012-07-24 Hitachi, Ltd. Storage device
US7925851B2 (en) 2003-03-27 2011-04-12 Hitachi, Ltd. Storage device
US7356660B2 (en) 2003-03-27 2008-04-08 Hitachi, Ltd. Storage device
US7237021B2 (en) 2003-04-04 2007-06-26 Bluearc Uk Limited Network-attached storage system, device, and method supporting multiple storage device types
WO2004095287A3 (en) * 2003-04-04 2005-06-02 Bluearc Uk Ltd Network-attached storage system, device, and method supporting multiple storage device types
US7509409B2 (en) 2003-04-04 2009-03-24 Bluearc Uk Limited Network-attached storage system, device, and method with multiple storage tiers
WO2004095287A2 (en) * 2003-04-04 2004-11-04 Bluearc Uk Limited Network-attached storage system, device, and method supporting multiple storage device types
US7797393B2 (en) 2004-01-08 2010-09-14 Agency For Science, Technology And Research Shared storage network system and a method for operating a shared storage network system
WO2005066830A1 (en) * 2004-01-08 2005-07-21 Agency For Science, Technology & Research A shared storage network system and a method for operating a shared storage network system
US8996455B2 (en) 2004-04-30 2015-03-31 Netapp, Inc. System and method for configuring a storage network utilizing a multi-protocol storage appliance
WO2005112397A1 (en) * 2004-04-30 2005-11-24 Network Appliance, Inc. System and method for configuring a storage network utilizing a multi-protocol storage appliance
WO2006012342A2 (en) * 2004-06-30 2006-02-02 Intel Corporation Multi-protocol bridge
WO2006012342A3 (en) * 2004-06-30 2006-05-26 Intel Corp Multi-protocol bridge
US7987154B2 (en) 2004-08-12 2011-07-26 Telecom Italia S.P.A. System, a method and a device for updating a data set through a communication network
US7562175B2 (en) 2005-05-12 2009-07-14 International Business Machines Corporation Internet SCSI communication via UNDI services
US7509449B2 (en) 2005-05-12 2009-03-24 International Business Machines Corporation Internet SCSI communication via UNDI services
US7430629B2 (en) 2005-05-12 2008-09-30 International Business Machines Corporation Internet SCSI communication via UNDI services
US7624134B2 (en) 2006-06-12 2009-11-24 International Business Machines Corporation Enabling access to remote storage for use with a backup program
US20220067693A1 (en) * 2020-08-26 2022-03-03 Mastercard International Incorporated Systems and methods for distributing data
US11699141B2 (en) * 2020-08-26 2023-07-11 Mastercard International Incorporated Systems and methods for distributing data

Also Published As

Publication number Publication date
US20040233910A1 (en) 2004-11-25

Similar Documents

Publication Publication Date Title
US20040233910A1 (en) Storage area network using a data communication protocol
JP4758424B2 (en) System and method capable of utilizing a block-based protocol in a virtual storage appliance running within a physical storage appliance
JP5026283B2 (en) Collaborative shared storage architecture
RU2302034C2 (en) Multi-protocol data storage device realizing integrated support of file access and block access protocols
JP4667707B2 (en) Method to mediate communication with movable media library using multiple partitions
US7315914B1 (en) Systems and methods for managing virtualized logical units using vendor specific storage array commands
US7055056B2 (en) System and method for ensuring the availability of a storage system
JP4252301B2 (en) Storage system and data backup method thereof
US7275050B2 (en) Storage system, a method of file data backup and method of copying of file data
US9311001B1 (en) System and method for managing provisioning of storage resources in a network with virtualization of resources in such a network
US7206863B1 (en) System and method for managing storage networks and providing virtualization of resources in such a network
US8205043B2 (en) Single nodename cluster system for fibre channel
KR100995466B1 (en) Methods and apparatus for implementing virtualization of storage within a storage area network
US8271606B2 (en) Network-based storage system capable of allocating storage partitions to hosts
EP1908261B1 (en) Client failure fencing mechanism for fencing network file system data in a host-cluster environment
US7519769B1 (en) Scalable storage network virtualization
JP4310070B2 (en) Storage system operation management method
US8219681B1 (en) System and method for managing provisioning of storage resources in a network with virtualization of resources in such a network
US20100080237A1 (en) Fibre channel proxy
US7620774B1 (en) System and method for managing storage networks and providing virtualization of resources in such a network using one or more control path controllers with an embedded ASIC on each controller
WO2002067529A2 (en) System and method for accessing a storage area network as network attached storage
JP2005505038A (en) Storage device switch for storage area network
JPH117404A (en) Network connection SCSI device and file system using the device
Sacks Demystifying storage networking: DAS, SAN, NAS, NAS gateways, Fibre Channel, and iSCSI
US10798159B2 (en) Methods for managing workload throughput in a storage system and devices thereof

Legal Events

Date Code Title Description
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (PCT application filed before 20040101)
121 EP: the EPO has been informed by WIPO that EP was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 EP: PCT application non-entry in European phase
NENP Non-entry into the national phase

Ref country code: JP

WWW WIPO information: withdrawn in national office

Country of ref document: JP