AU2005200529B2 - System and method for managing storage resources in a clustered computing environment - Google Patents

System and method for managing storage resources in a clustered computing environment

Info

Publication number
AU2005200529B2
Authority
AU
Australia
Prior art keywords
node
scsi
command
computing environment
clustered computing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2005200529A
Other versions
AU2005200529A1 (en)
Inventor
Nam V. Nguyen
Ahmad H. Tawil
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dell Products LP
Original Assignee
Dell Products LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/524,401 external-priority patent/US6622163B1/en
Application filed by Dell Products LP filed Critical Dell Products LP
Publication of AU2005200529A1 publication Critical patent/AU2005200529A1/en
Priority to AU2007202999A priority Critical patent/AU2007202999B2/en
Application granted granted Critical
Publication of AU2005200529B2 publication Critical patent/AU2005200529B2/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614Improving the reliability of storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604Improving or facilitating administration, e.g. storage management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0659Command handling arrangements, e.g. command buffers, queues, command scheduling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F2003/0697Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers device management, e.g. handlers, drivers, I/O schedulers

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Description

S&FRef: 544889D1
AUSTRALIA
PATENTS ACT 1990 COMPLETE SPECIFICATION FOR A STANDARD PATENT

Name and Address of Applicant: Dell Products, of One Dell Way, Round Rock, Texas, 78682-2244, United States of America
Actual Inventor(s): Ahmad H. Tawil, Nam V. Nguyen
Address for Service: Spruson & Ferguson, St Martins Tower, Level 31 Market Street, Sydney NSW 2000 (CCN 3710000177)
Invention Title: System and method for managing storage resources in a clustered computing environment

The following statement is a full description of this invention, including the best method of performing it known to me/us:

SYSTEM AND METHOD FOR MANAGING STORAGE RESOURCES IN A CLUSTERED COMPUTING ENVIRONMENT

TECHNICAL FIELD OF THE DISCLOSURE

The present disclosure relates in general to the field of data storage systems and, more particularly, to a system and method for managing storage resources in a clustered computing environment.
BACKGROUND OF THE DISCLOSURE

Storage area networks (SANs) often include a collection of data storage resources communicatively coupled to a plurality of nodes such as workstations and servers. In the present disclosure, the terms "node" and "server" are used interchangeably, with the understanding that a "server" is one type of "node".

Within a SAN, a server may access a data storage resource across a fabric using the Fibre Channel protocol. The Fibre Channel protocol may act as a common physical layer that allows for the transportation of multiple upper layer protocols, such as the small computer system interface (SCSI) protocol. In a SAN environment, the SCSI protocol may assign logical unit numbers (LUNs) to the collection of data storage resources. The LUNs may allow a server within a SAN to access specific data storage resources by referencing a SCSI LUN for a specific data storage resource.

Though a Fibre Channel storage system can offer a great deal of storage capacity, the system can also be very expensive to implement. As a result, users often seek to share the available storage provided by the system among multiple servers. Unfortunately, if a server coupled to a given SAN uses the MICROSOFT WINDOWS NT operating system, the server may attempt to take ownership of any LUN visible to the server. For example, if a particular server detects several LUNs when the server boots, it may assume each LUN is available for its use. Therefore, if multiple WINDOWS NT servers are attached to a storage pool or a collection of data storage resources, each server may attempt to take control of each LUN in the storage pool. This situation can lead to conflicts when more than one server attempts to access the same LUN.
A user seeking to solve this problem may partition or zone the available storage through filtering or through the use of miniport drivers that have LUN masking capabilities. In effect, this partitioning may prevent a server running WINDOWS NT from seeing storage capacity that is not assigned to it. This approach may be effective for stand-alone servers, but the approach has several shortcomings in a clustered computing environment.
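As an illustration of the kind of filtering such a miniport driver might perform, a minimal sketch follows; the assignment table, WWN strings, and function name are assumptions made for this example rather than details taken from the disclosure.

```python
# Hypothetical LUN masking sketch: a node is only shown the LUNs assigned to it.
# The assignment table and names below are illustrative assumptions.

LUN_ASSIGNMENTS = {
    "AAA": {1, 2},      # node with WWN "AAA" is assigned LUN_1 and LUN_2
    "BBB": {1, 2},
    "CCC": {3, 4, 5},
    "DDD": {3, 4, 5},
}

def visible_luns(node_wwn, discovered_luns):
    """Filter the LUNs reported to a node down to those assigned to that node."""
    allowed = LUN_ASSIGNMENTS.get(node_wwn, set())
    return sorted(lun for lun in discovered_luns if lun in allowed)

if __name__ == "__main__":
    # A stand-alone WINDOWS NT server with WWN "AAA" scanning the pool sees only its LUNs.
    print(visible_luns("AAA", {1, 2, 3, 4, 5}))  # -> [1, 2]
```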
Clustering involves the configuring of a group of independent servers so that they appear on a network as a single machine. Often, clusters are managed as a single system, share a common namespace, and are designed specifically to tolerate component failures and to support the addition or subtraction of components in a transparent manner. Unfortunately, because a cluster may have two or more servers that appear to be a single machine, the partitioning techniques mentioned above may prove an ineffective solution for avoiding conflicts when the two or more servers attempt to access the same LUN.
MICROSOFT CLUSTER SERVER (MSCS) embodies one currently available technique for arbitrating conflicts and managing ownership of storage devices in a clustered computing environment. An MSCS system may operate within a cluster that has two servers, server A, which may be in charge, and server B. In operation, server A may pass a periodic heartbeat signal to server B to let server B know that server A is "alive". If server B does not receive a timely heartbeat from server A, server B may seek to determine whether server A is operable and/or whether server B may take ownership of any LUNs reserved for server A. Unfortunately, the MSCS system may utilize SCSI target resets during this process, and the SCSI resets may create several problems. For example, a typical SCSI reset in the MSCS system may cause all servers within a given Fibre Channel system to abort their pending input/output processes. These aborted I/O processes may eventually be completed, but not until the bus settles. This abort/wait/retry approach can have a detrimental effect on overall system performance.
In addition to this potential effect on performance, the MSCS system and its use of SCSI resets may have a detrimental effect on overall system reliability. In operation, the MSCS system may only account for one SCSI reset at a time. The inability to account for subsequent SCSI resets may lead to unexpected behavior and decrease system reliability.
SUMMARY OF THE DISCLOSURE

In accordance with the present disclosure, a system and method for managing storage resources in a clustered computing environment are disclosed that provide significant advantages over prior developed techniques. The disclosed system and method may allow for storage resource management and conflict arbitration with a reduced reliance on SCSI resets.
According to an aspect of the present disclosure, a method for managing storage resources in a clustered computing environment is provided. The method comprises the steps of: receiving a small computer system interface (SCSI) reservation command seeking to reserve a storage resource for a node of the clustered computing environment; and in response to the reservation command, issuing a small computer system interface persistent reserve out command with a service action of reserve to reserve the storage resource for the node.
In one implementation, the reservation command is received by, and the persistent reserve out command is issued by, a miniport driver.
In a specific implementation the method for managing storage resources in a clustered computing environment further includes the step of releasing a reservation held for the node by issuing a small computer system interface persistent reserve out command with a service action of clear.
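A minimal sketch of the claimed behaviour is given below. The Python class, the transport object, and its send() interface are assumptions introduced purely for illustration; only the mapping from a SCSI reservation command to a persistent reserve out command with service actions of reserve and clear comes from the disclosure.

```python
# Sketch of the claimed method: an incoming SCSI reservation command for a node is
# satisfied by issuing PERSISTENT RESERVE OUT with a service action of RESERVE, and a
# held reservation is released with a service action of CLEAR. Names are assumptions.

class ReservationTranslator:
    def __init__(self, transport):
        self.transport = transport  # assumed object exposing send(command_dict)

    def on_reservation_command(self, node, lun):
        # Claimed step: respond to the reservation command with a persistent
        # reserve out command carrying a service action of reserve.
        self.transport.send({"opcode": "PERSISTENT RESERVE OUT",
                             "service_action": "RESERVE",
                             "node": node, "lun": lun})

    def release(self, node, lun):
        # Further step: release a reservation held for the node with a
        # service action of clear.
        self.transport.send({"opcode": "PERSISTENT RESERVE OUT",
                             "service_action": "CLEAR",
                             "node": node, "lun": lun})

class PrintTransport:
    def send(self, command):
        print(command)

translator = ReservationTranslator(PrintTransport())
translator.on_reservation_command(node="AAA", lun=1)
translator.release(node="AAA", lun=1)
```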
A system and method incorporating teachings of the present disclosure may provide significant improvements over conventional cluster resource management solutions. For example, the disclosed techniques may be operable to better manage and arbitrate storage resource conflicts. A SCSI reset in a clustered computing environment can result in the initiation of an abort/wait/retry approach to several I/O processes, which can have a detrimental effect on overall system performance. The teachings of the present disclosure may help reduce reliance on SCSI resets and the resulting performance degradations.
In addition, the teachings of the present disclosure may facilitate the avoidance of system reliability problems associated with SCSI resets in a clustered computing environment. A conventional cluster resource management system, such as MSCS, may be unable to account for SCSI resets initiated during the bus disturbance of an earlier SCSI reset. This limitation may lead to unexpected behavior and decrease system reliability. Because the teachings of the present disclosure may facilitate the avoidance of at least some SCSI resets, system reliability may be improved.
Other technical advantages should be apparent to one of ordinary skill in the art in view of the specification, claims, and drawings.
BRIEF DESCRIPTION OF THE DRAWINGS

A more complete understanding of the present disclosure and advantages thereof may be acquired by referring to the following description taken in conjunction with the accompanying drawings, in which like reference numbers indicate like features, and wherein:

FIGURE 1 depicts a component diagram of a storage area network including one embodiment of a resource management engine that incorporates teachings of the present disclosure;

FIGURE 2 shows a flow diagram for one embodiment of a method for managing storage resources in a clustered computing environment in accordance with teachings of the present disclosure; and

FIGURE 3 shows a flow diagram for another embodiment of a method for managing storage resources in a clustered computing environment in accordance with teachings of the present disclosure.
DETAILED DESCRIPTION OF THE DISCLOSURE

FIGURE 1 depicts a general block diagram of a storage area network indicated generally at 10.
SAN 10 includes two clustered computing systems, clusters 12 and 14. As depicted, cluster 12 includes node 16 and node 18, and cluster 14 includes node 20 and node 22. Nodes 16, 18, 20, and 22 may be, for example, servers, workstations, or other network computing devices. As depicted in FIGURE 1, cluster 12 may be supporting a number of client devices such as the client personal computers representatively depicted at 24.

SAN 10 may also include a storage pool 26, which may include, for example, a plurality of physical storage devices such as hard disk drives under the control of and coupled to one or more storage controllers. The physical storage devices of storage pool 26 may be assigned LUNs.
Some physical storage devices may be grouped into RAID volumes with each volume assigned a single SCSI LUN address. Other physical storage devices may be individually assigned one or more LUNs. However the LUNs are assigned, the LUNs of FIGURE 1 may map the available physical storage of storage pool 26 into a plurality of logical storage devices and allow these logical storage devices to be identified and addressed.
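Purely as an illustrative assumption, the kind of mapping described above can be pictured as a small table from LUN to the physical devices backing it; the device names and groupings below are invented for the example.

```python
# Illustrative LUN map: logical unit numbers presenting the physical disks of the
# storage pool as addressable logical storage devices. Names are assumptions.
LUN_MAP = {
    1: {"kind": "RAID volume", "members": ["disk0", "disk1", "disk2"]},
    2: {"kind": "RAID volume", "members": ["disk3", "disk4"]},
    3: {"kind": "single disk", "members": ["disk5"]},
}

def backing_devices(lun):
    """Return the physical devices behind a logical unit number."""
    return LUN_MAP[lun]["members"]

print(backing_devices(1))  # -> ['disk0', 'disk1', 'disk2']
```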
In operation, nodes 16, 18, 20, and 22 may communicate with and transfer data to and from storage pool 26 through fabric 28 using the Fibre Channel protocol. As depicted in FIGURE 1, nodes 16 and 18 may be grouped into zone 30 with LUN_1 and LUN_2. Similarly, nodes 20 and 22 may be grouped into zone 32 with LUN_3, LUN_4, and LUN_5.

Using switch zoning to create zone 30 may prevent nodes 16 and 18 from seeing nodes 20 and 22. Similarly, using switch zoning to create zone 32 may prevent nodes 20 and 22 from seeing nodes 16 and 18. In addition to zoning, the embodiment of FIGURE 1 may employ LUN masking. LUN masking may blind a specific node or cluster from seeing certain LUNs. For example, LUN masking may prevent nodes 16 and 18 from seeing LUN_3, LUN_4, and LUN_5.

In the embodiment of FIGURE 1, nodes 16, 18, 20, and 22 may be assigned a unique world wide name (WWN), which may be an eight byte identifier. The Institute of Electrical and Electronics Engineers (IEEE) assigns blocks of WWNs to manufacturers so manufacturers can build fibre channel devices with unique WWNs. For illustrative purposes, in the embodiment of FIGURE 1, node 16 may have a WWN of "AAA", node 18 may have a WWN of "BBB", node 20 may have a WWN of "CCC", and node 22 may have a WWN of "DDD". As such, nodes 16, 18, 20, and 22 may be uniquely identifiable by other devices coupled to fabric 28.

Nodes 16, 18, 20, and 22 may have identification information in addition to their respective WWNs. For example, according to the fibre channel protocol, when a node such as node 16 is initialized and logs into fabric 28, the node is assigned a fibre channel ID. This ID may be subject to change each time some initialization event occurs, for example, when another node or device logs into fabric 28. As depicted in FIGURE 1, fabric 28 has assigned fibre channel IDs as follows: node 16 is S_ID 1, node 18 is S_ID 2, node 20 is S_ID 3, and node 22 is S_ID 4.

In the embodiment of FIGURE 1, the various WWNs and fibre channel IDs may be stored in a computer readable medium 34, which may be accessible to devices of SAN 10. As shown in FIGURE 1, SAN 10 may include a computing device 38 for establishing fabric 28. Such a computing device may include a CPU communicatively coupled to computer readable medium 34. Switch 36 may also have at least one port 40 for interfacing with other devices to form an overall fibre channel network.

In one embodiment of a system incorporating teachings of the present disclosure, computing device 38 may be operable to execute a resource management engine, which may be stored in computer readable medium 34. The resource management engine may be operable to perform several functions. For example, the resource management engine may be operable to access a maintained list of the WWNs and the fibre channel IDs of SAN 10 devices. In addition, the resource management engine may be operable to recognize a SCSI reset command issued by a node and to convert the command into a storage resource releasing command. The storage resource releasing command may be, for example, a third party process log out or a SCSI persistent reserve out command with a clear action.
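A rough sketch of those two functions, keeping the list of identifiers and converting a reset into a releasing command, appears below. The data structures, the policy flag, and the function names are assumptions for illustration and are not drawn from the disclosed implementation.

```python
# Hypothetical sketch of the resource management engine's duties described above:
# maintain a list of WWNs and fibre channel IDs, and convert a SCSI reset aimed at
# a node into a storage resource releasing command.

node_table = {}  # WWN -> fibre channel ID (S_ID), maintained by the engine

def record_node(wwn, s_id):
    node_table[wwn] = s_id

def convert_scsi_reset(target_wwn, use_persistent_reservations):
    """Translate a SCSI reset into one of the two releasing commands mentioned above."""
    if use_persistent_reservations:
        # Release by clearing the persistent reservation held for the node.
        return {"command": "PERSISTENT RESERVE OUT", "service_action": "CLEAR",
                "wwn": target_wwn}
    # Otherwise issue a third party process log out on the node's behalf.
    return {"command": "LOGO", "s_id": node_table[target_wwn], "wwn": target_wwn}

record_node("AAA", 0x010101)
print(convert_scsi_reset("AAA", use_persistent_reservations=False))
```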
In a typical MSCS cluster, a SCSI reset command may be issued when a node like node 18 or 20 fails to acknowledge receipt of a timely heartbeat 42 or 44 from a respective cluster mate. Heartbeats 42 and 44 may allow nodes 18 and 22, respectively, to "see" if their cluster mates are still functioning.
If, for example, node 18 can no longer "see" node 16, node 18 may seek to have any LUN reservations held for node 16 released. To accomplish this release, node 18 may send a SCSI reset command to initiate a low-level bus reset of the SCSI buses associated with nodes 16 and 18. In some systems, for example an MSCS system, node 18 may wait some specified amount of time before trying to reserve the LUNs that had been reserved by node 16. The waiting allows node 16 to regain control of the LUNs reserved to it before the SCSI reset. As such, if node 16 is "alive" despite node 18's failure to receive heartbeat 42, node 16 may be able to re-establish its resource reservations and in so doing let node 18 know that it is "alive".
Unfortunately, as mentioned above, a SCSI reset in a clustered computing environment can have a detrimental effect on overall system performance and system reliability. The disclosed system and resource management engine may help limit a clustered computing environment's reliance on SCSI resets in several different ways. Example techniques for avoiding SCSI resets may be better understood through consideration of FIGUREs 2 and 3.
FIGURE 2 depicts a flow diagram of one embodiment of a method 100 for managing storage resources in a clustered computing environment. The method of FIGURE 2 may be implemented by a resource management engine executing on a storage controller attached to a SAN fabric. In some embodiments, the resource management engine may be executing on a CPU associated with a switch like switch 36 of FIGURE 1. In other embodiments, the CPU may be associated with a SAN device other than the switch. For example, a resource management engine may be executing on one or more nodes of a SAN.
At step 102, the SID -and the WWN of a cluster node may be. extracted. The extraction may occur at different times. For example, the extraction may occur when a -node issues a PLOGI command. Once extracted, the SID and the WWN may be updated and may be stored in a computer readable medium. In some embodiments, this computer readable medium may be part of a SAN and may be accessible to several devices of the SAN.- At step 104, a LUN reservation may be held for a given node. In effect, the given node may have the exclusive right to use the reserved LUN. As is mentioned above, cluster nodes often communicate with one another using a heartbeat signal. At step 106, a SAN device may detect a failure to receive a timely heartbeat signal.
Though the failure to receive a heartbeat signal may only indicate a failed communication link between the 00 heartbeat sender and the heartbeat receiver, the failure may result, as shown at step 108, in the determination t that a cluster node is inoperable.
In the embodiment of FIGURE 2, the determination that a node is inoperable, may cause another node to issue a SCSI reset. As shown at step 110, a SCSI reset (C7 command may be sent to release LUN reservations held for the node believed to be inoperable (the-"dead" node).. At step 112, the SCSI reset command may-be converted into a third party process log out. This conversion may, for example, be performed by an executing resource management engine.
At step 114 a log out command for the "dead" node may be sent on the "dead" node's behalf by a third party.
For example, a resource management engine may access a computer readable medium storing the "dead" node's S ID and WWN. The resource management engine may use the S ID and the WWN of the "dead" node to log out the "dead" node. This third party process log out may result in the releasing of LUN reservations held for the logged out node.
As shown at step 116 of FIGURE 2, other nodes of a cluster may also log out or be logged out and a loop initialization protocol (LIP) link reset may be initiated. The LIP link reset of step 118 may be followed by step 120's generation of a state change notification. In the embodiment of FIGURE 2, the state change notification may cause active cluster nodes, nodes that are not dead, to perform a port login and to seek LUN reservations. The port login of active cluster nodes 00 16 may be seen at step 122; If the "dead" node was not dead, it may be able to regain its LUN reservations. If the "dead" nodew-as -dead, other. cluster nodes. may nowbe, able to capture the LUN reservations held by the "dead, node.' In effect, the storage resources held by the dead node will be made available to "live" nodes -resulting in a better utilization of storage resources -without a SCSI reset.
Another embodiment of a method 200 for managing storage resources in a clustered computing environment may be seen in FIGURE 3. The method of FIGURE 3, like the method of FIGURE 2, may be implemented by a resource management engine. This engine may be located any number of places. For example, the engine may be located at a switch, a node, or a storage control attached to a Fibre channel fabric.
As shown at step 202, method 200 may involve the receiving of a SCSI LUN reservation command. A typical SCSI reservation command may be cleared with a SCSI reset. As mentioned Above, SCSI resets may cause a number of problems within a clustered computing environment. As such, at step 204, 'the SCSI reserve command may be converted to a SCSI persistent reserve out command with a service action of RESERVE. The conversion from SCSI reserve to SCSI persistent reserve may be performed, for example, by an executing resource management engine- The persistent reserve out command may hold a persistent LTJN reservation as shown at step 206 for the holding node, the node issuing the SCSI reserve command.
17 At step 208, it may be determined that the holding Snode is inoperable. In response to this determination, a *n SCSI reset command may be issued. The. SCSI reset command Sof step 210 may be converted at step 212 to a SCSI S persistent reserve command with a service action of SCLEAR. In operation, the SCSI persistent reserve command with a service action of CLEAR may release the LUN reservations held by the initial SCSI persistent reserve out command. The LUN releasing of step 214 may effectively release storage resources held by nodes determined to be inoperable at step 208. This may result in a better utilization of storage.resources within a clustered computing environment, and the better utilization may be accomplished without employing SCSI resets.
Various changes to the above embodiments are contemplated by the present disclosure. For example, embodiments of the present disclosure may be implemented in SANs having any number of topologies. There may be, for example, numerous storage controllers, there may be a resource management engine executing on each node of a cluster, or there may be a single resource management engine executing within each zone of a clustered computing environment.
Although the disclosed embodiments have been described in detail, it should be understood that various changes, substitutions and alterations can be made to the embodiments without departing from their spirit and scope.

Claims (4)

1. A method for managing storage resources in a clustered computing environment, the method comprising: receiving a small computer system interface (SCSI) reservation command seeking to reserve a storage resource for a node of the clustered computing environment; and in response to the reservation command, issuing a small computer system interface persistent reserve out command with a service action of reserve to reserve the storage resource for the node.
2. The method of claim 1, wherein a miniport driver receives the reservation command and issues the persistent reserve out command.
3. The method of claim 1 or claim 2, further comprising releasing a reservation held for the node by issuing a small computer system interface persistent reserve out command with a service action of clear.
4. A method for managing storage resources in a clustered computing environment, said method being substantially as described herein with reference to the accompanying drawings.

DATED this Twentieth Day of June, 2007
Dell Products, L.P.
Patent Attorneys for the Applicant
SPRUSON & FERGUSON
AU2005200529A 2000-03-09 2005-02-08 System and method for managing storage resources in a clustered computing environment Ceased AU2005200529B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2007202999A AU2007202999B2 (en) 2000-03-09 2007-06-28 System and method for managing storage resources in a clustered computing environment

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US09/524401 2000-03-09
US09/524,401 US6622163B1 (en) 2000-03-09 2000-03-09 System and method for managing storage resources in a clustered computing environment
AU19714/01A AU780496B2 (en) 2000-03-09 2001-02-12 System and method for managing storage resources in a clustered computing environment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
AU19714/01A Division AU780496B2 (en) 2000-03-09 2001-02-12 System and method for managing storage resources in a clustered computing environment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
AU2007202999A Division AU2007202999B2 (en) 2000-03-09 2007-06-28 System and method for managing storage resources in a clustered computing environment

Publications (2)

Publication Number Publication Date
AU2005200529A1 AU2005200529A1 (en) 2005-03-03
AU2005200529B2 true AU2005200529B2 (en) 2007-07-19

Family

ID=38329974

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2005200529A Ceased AU2005200529B2 (en) 2000-03-09 2005-02-08 System and method for managing storage resources in a clustered computing environment

Country Status (1)

Country Link
AU (1) AU2005200529B2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10547589B2 (en) 2016-05-09 2020-01-28 Cisco Technology, Inc. System for implementing a small computer systems interface protocol over a content centric network

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0033915A1 (en) * 1980-02-07 1981-08-19 HONEYWELL INFORMATION SYSTEMS ITALIA S.p.A. Method for releasing common memory resources in a multiprocessor system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0033915A1 (en) * 1980-02-07 1981-08-19 HONEYWELL INFORMATION SYSTEMS ITALIA S.p.A. Method for releasing common memory resources in a multiprocessor system

Also Published As

Publication number Publication date
AU2005200529A1 (en) 2005-03-03

Similar Documents

Publication Publication Date Title
AU780496B2 (en) System and method for managing storage resources in a clustered computing environment
IE84170B1 (en) System and method for managing storage resources in clustered computing environment
US9507524B1 (en) In-band management using an intelligent adapter and methods thereof
US7921431B2 (en) N-port virtualization driver-based application programming interface and split driver implementation
US7467191B1 (en) System and method for failover using virtual ports in clustered systems
US20090089609A1 (en) Cluster system wherein failover reset signals are sent from nodes according to their priority
US20060156055A1 (en) Storage network that includes an arbiter for managing access to storage resources
US20070022314A1 (en) Architecture and method for configuring a simplified cluster over a network with fencing and quorum
CN105892943A (en) Access method and system for block storage data in distributed storage system
US7836351B2 (en) System for providing an alternative communication path in a SAS cluster
JP2009075718A (en) Method of managing virtual i/o path, information processing system, and program
US10782889B2 (en) Fibre channel scale-out with physical path discovery and volume move
US8621059B1 (en) System and method for distributing enclosure services data to coordinate shared storage
AU2005200529B2 (en) System and method for managing storage resources in a clustered computing environment
US9454305B1 (en) Method and system for managing storage reservation
GB2379769A (en) System and method for managing storage resources in a clustered computing environment
IE84046B1 (en) System and method for managing storage resources in a clustered computing environment
IE83771B1 (en) System and method for managing storage resources in a clustered computing environment
AU2007202999B2 (en) System and method for managing storage resources in a clustered computing environment
GB2387252A (en) System and method for managing storage resources in a clustered computing environment
US9436654B1 (en) Methods and systems for processing task management functions in a cluster having an intelligent storage adapter

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)
MK14 Patent ceased section 143(a) (annual fees not paid) or expired