US7702757B2 - Method, apparatus and program storage device for providing control to a networked storage architecture - Google Patents

Method, apparatus and program storage device for providing control to a networked storage architecture

Info

Publication number
US7702757B2
US7702757B2 (application US10/819,695, US81969504A)
Authority
US
United States
Prior art keywords
storage device
controller
storage
file
controllers
Prior art date
Legal status
Expired - Fee Related
Application number
US10/819,695
Other versions
US20050234916A1
Inventor
Lyle Bergman
Dave Ebsen
Randal S. Rysavy
Timothy W. Swatosh
Jeffrey L. Williams
Current Assignee
Xiotech Corp
Original Assignee
Xiotech Corp
Priority date
Filing date
Publication date
Application filed by Xiotech Corp filed Critical Xiotech Corp
Priority to US10/819,695
Assigned to XIOTECH CORPORATION. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: RYSAVY, RANDY; SWATOSH, TIMOTHY W.; WILLIAMS, JEFFREY L.; BERGMAN, LYLE; EBSEN, DAVE
Publication of US20050234916A1
Assigned to SILICON VALLEY BANK. SECURITY AGREEMENT. Assignors: XIOTECH CORPORATION
Assigned to HORIZON TECHNOLOGY FUNDING COMPANY V LLC and SILICON VALLEY BANK. SECURITY AGREEMENT. Assignors: XIOTECH CORPORATION
Application granted
Publication of US7702757B2
Assigned to XIOTECH CORPORATION. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: HORIZON TECHNOLOGY FUNDING COMPANY V LLC
Assigned to XIOTECH CORPORATION. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: SILICON VALLEY BANK
Legal status: Expired - Fee Related
Adjusted expiration

Classifications

    • G — Physics
    • G06 — Computing; Calculating or Counting
    • G06F — Electric Digital Data Processing
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 — File systems; File servers



Abstract

A method, apparatus and program storage device for providing control to a networked storage architecture is disclosed. A networked storage device is provided. Controllers are coupled to the networked storage device for controlling its input/output operations. The networked storage device includes a file system for storing data provided by a first of the controllers for retrieval by the other controllers.

Description

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates in general to computer storage systems, and more particularly to a method, apparatus and program storage device for providing control to a networked storage architecture.
2. Description of Related Art
Distributed computing systems, such as clusters, may include two or more nodes, which may be employed to perform a computing task. Generally speaking, a node is a group of circuitry designed to perform one or more computing tasks. A node may include one or more processors, a memory and interface circuitry. Generally speaking, a cluster is a group of two or more nodes that have the capability of exchanging data between nodes. A particular computing task may be performed upon one node while other nodes perform unrelated computing tasks. Alternatively, components of a particular computing task may be distributed among the nodes to decrease the time required to perform the computing task as a whole. Generally speaking, a processor is a device configured to perform an operation upon one or more operands to produce a result. The operations may be performed in response to instructions executed by the processor.
Clustering is a popular strategy for implementing parallel processing applications because it allows system administrators to leverage already existing servers, computers and workstations. Clustering is also useful for load balancing to distribute processing and communications activity evenly across a network system so that no single server is overwhelmed. For example, if one server is running the risk of being swamped, requests may be forwarded to another clustered server with greater capacity. Clustering also provides for increased scalability by allowing new components to be added as the system load increases. In addition, clustering simplifies the management of groups of systems and their applications by allowing the system administrator to manage an entire group as a single system. Clustering may also be used to increase the fault tolerance of a network system. For example, if one server suffers an unexpected software or hardware failure, another clustered server may assume the operations of the failed server.
Clustering may be implemented in computer networks utilizing storage area networks (SAN) and similar networking environments. SAN networks allow storage systems to be shared among multiple clusters and/or servers. Nodes within a cluster may have one or more storage devices coupled to the nodes. Generally speaking, a storage device is a persistent device capable of storing large amounts of data. For example, a storage device may be a magnetic storage device such as a disk device or optical storage device such as a compact disc device. Although a disk device is only one example of a storage device, the term “disk” may be used interchangeably with “storage device” throughout this specification. Nodes physically connected to a storage device may access the storage device directly. A storage device may be physically connected to one or more nodes of a cluster, but the storage device may not be physically connected to all the nodes of a cluster. The nodes that are not physically connected to a storage device may not access that storage device directly. In some clusters, a node not physically connected to a storage device may indirectly access the storage device via a data communication link connecting the nodes.
It may be advantageous to allow a node to access any storage device within a cluster as if the storage device is physically connected to the node. For example, some applications, such as the Oracle Parallel Server, may require all storage devices in a cluster to be accessed via normal storage device semantics, e.g., Unix device semantics. The storage devices that are not physically connected to a node but which appear to be physically connected to a node are called virtual devices or virtual disks. Generally speaking, a distributed virtual disk system is a software program operating on two or more nodes which provides an interface between a client and one or more storage devices and presents the appearance that the one or more storage devices are directly connected to the nodes. Generally speaking, a client is a program or subroutine that accesses a program to initiate an action. A client may be an application program or an operating system subroutine.
Unfortunately, conventional virtual disk systems do not guarantee a consistent virtual disk mapping. Generally speaking, a storage device mapping identifies to which nodes a storage device is physically connected and which disk device on those nodes corresponds to the storage device. The node and disk device that map a virtual device to a storage device may be referred to as a node/disk pair. The virtual device mapping may also contain permissions and other information. It is desirable that the mapping is persistent in the event of failures, such as a node failure. A node is physically connected to a device if it can communicate with the device without the assistance of other nodes.
A cluster may implement a volume manager. A volume manager is a tool for managing the storage resources of the cluster. For example, a volume manager may mirror two storage devices to create one highly available volume. In another embodiment, a volume manager may implement striping, which is storing portions of files across multiple storage devices. Conventional virtual disk systems cannot support a volume manager layered either above or below the storage devices.
Other desirable features include high availability of data access requests such that data access requests are reliably performed in the presence of failures, such as a node failure or a storage device path failure. Generally speaking, a storage device path is a direct connection from a node to a storage device. Generally speaking, a data access request is a request to a storage device to read or write data.
In a virtual disk system, multiple nodes may have representations of a storage device. Unfortunately, conventional systems do not provide a reliable means of ensuring that the representations on each node have consistent permission data. Generally speaking, permission data identify which users have permission to access devices, directories or files. Permissions may include read permission, write permission or execute permission.
Still further, it is desirable to have the capability of adding or removing nodes from a cluster or to change the connection of existing nodes to storage devices while the cluster is operating. This capability is particularly important in clusters used in critical applications in which the cluster cannot be brought down. This capability allows physical resources (such as nodes and storage devices) to be added to the system, or repair and replacement to be accomplished without compromising data access requests within the cluster.
It is also desirable to provide the ability for rapid recovery of user data from a disaster or significant error event at a data processing facility. This type of capability is often termed “disaster tolerance.” In a data storage environment, disaster tolerance requirements include providing for replicated data and redundant storage to support recovery after the event. In order to provide a safe physical distance between the original data and the data to back up, the data must be migrated from one storage subsystem or physical site to another subsystem or site. It is also desirable for user applications to continue to run while data replication continues in the background. Data warehousing, continuous computing, and Enterprise Applications all require remote copy capabilities.
Storage controllers are commonly utilized in computer systems to off-load from the host computer certain lower level processing functions relating to I/O operations, and to serve as interface between the host computer and the physical storage media. Given the critical role played by the storage controller with respect to computer system I/O performance, it is desirable to minimize the potential for interrupted I/O service due to storage controller malfunction. Thus, prior workers in the art have developed various system design approaches in an attempt to achieve some degree of fault tolerance in the storage control function.
One prior method of providing storage system fault tolerance accomplishes failover through the use of two controllers coupled in an active/passive configuration. During failover, the passive controller takes over for the active (failing) controller. A drawback to this type of dual configuration is that it cannot support load balancing to increase overall system performance, as only one controller is active and thus utilized at any given time. Furthermore, the passive controller represents an inefficient use of system resources.
Another approach to storage controller fault tolerance is based on a process called “failover.” Failover is known in the art as a process by which a first storage controller coupled to a second controller assumes the responsibilities of the second controller when the second controller fails. “Failback” is the reverse operation, wherein the second controller, having been either repaired or replaced, recovers control over its originally attached storage devices. Since each controller is capable of accessing the storage devices attached to the other controller as a result of the failover, there is no need to store and maintain a duplicate copy of the data, i.e., one set stored on the first controller's attached devices and a second (redundant) copy on the second controller's devices.
However, in a multi-controller system with a shared configuration, a method to track configurations is required. The need to provide a consistent configuration and control mechanism across all controllers in the storage system is paramount in order to present a unified, functional storage system. In addition, a way to transfer these configurations between controllers is needed to maintain this consistency. In addition, one controller may be designated as a master to simplify control over the storage system. In such an arrangement, a way to provide remote control of multiple controllers from one controller is needed.
It can be seen then that there is a need for a method, apparatus and program storage device for providing control to a networked storage architecture.
SUMMARY OF THE INVENTION
To overcome the limitations in the prior art described above, and to overcome other limitations that will become apparent upon reading and understanding the present specification, the present invention discloses a method, apparatus and program storage device for providing control to a networked storage architecture.
The present invention solves the above-described problems by providing a method to track shared configuration data. The present invention also provides a way to transfer data including configuration data to each controller. A file system is provided to control multiple controllers from one controller remotely, via synchronous bi-directional communications over a network. The file system is stored in a commonly accessible networked storage device. A heartbeat file may be used to indicate whether a particular slave controller is alive.
A system in accordance with the principles of the present invention includes at least one networked storage device and a plurality of controllers, coupled to the at least one networked storage device, for controlling input/output operations of the at least one networked storage device, wherein the at least one networked storage device includes a file system for storing data provided by a first of the plurality of controllers for retrieval by at least a second controller.
In another embodiment of the present invention, a method for providing control to a networked storage architecture is provided. The method includes generating data at a first controller, writing the data to at least one networked storage device, retrieving the data by at least a second controller and processing the retrieved data at the at least second controller.
In another embodiment of the present invention, another storage system is provided. This storage system includes means for providing networked storage and means for controlling the means for providing networked storage, wherein the means for providing networked storage includes means for storing files provided by the means for controlling the means for providing network storage for retrieval by the means for controlling the means for providing network storage.
In another embodiment of the present invention, a program storage device readable by a computer is provided. The program storage device tangibly embodies one or more programs of instructions executable by the computer to perform a method for providing control to a networked storage architecture, wherein the method includes generating data at a first controller, writing the data to at least one networked storage device, retrieving the data by at least a second controller and processing the retrieved data at the at least second controller.
These and various other advantages and features of novelty which characterize the invention are pointed out with particularity in the claims annexed hereto and form a part hereof. However, for a better understanding of the invention, its advantages, and the objects obtained by its use, reference should be made to the drawings which form a further part hereof, and to accompanying descriptive matter in which there are illustrated and described specific examples of an apparatus in accordance with the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
Referring now to the drawings in which like reference numbers represent corresponding parts throughout:
FIG. 1 illustrates a storage system according to an embodiment of the present invention;
FIG. 2 is a simplified block diagram showing the configuration of a distributed network computer storage system according to an embodiment of the present invention;
FIG. 3 illustrates a simplified view of controllers and storage devices according to an embodiment of the present invention;
FIG. 4 illustrates a file system for sharing data between controllers according to an embodiment of the present invention;
FIG. 5 illustrates the control of multiple controllers from one controller remotely, via synchronous bi-directional communications over a network using a commonly accessible networked storage device, and dedicated input/output and heartbeat files according to an embodiment of the present invention;
FIG. 6 illustrates a flow chart for building client information according to an embodiment of the present invention; and
FIG. 7 is a flow chart of the method for communicating between controllers according to an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
In the following description of the embodiments, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration the specific embodiments in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention.
The present invention provides a method, apparatus and program storage device for providing control to a networked storage architecture. The present invention provides a method to track shared configuration data. The present invention also provides a way to transfer data including configuration data to each controller. A file system is provided to control multiple controllers from one controller remotely, via synchronous bi-directional communications over a network. The file system is stored in a commonly accessible networked storage device. A heartbeat file may be used to indicate whether a particular slave controller is alive.
FIG. 1 illustrates a storage system 100 according to an embodiment of the present invention. In FIG. 1, multiple users 110 are coupled to a network 112. For example, Ethernet is one type of network 112. Ethernet is generally placed at the data link layer of the Open Systems Interconnection (OSI) 7-layer model, second from the bottom, but it also includes elements of the physical layer.
An access node 120 is coupled to a storage platform system 130. The access node 120 may be a server that is accessed by the users via Ethernet, for example, as discussed above, a gateway device, etc. The access node 120 may be coupled to the storage platform system 130 via a storage area network 122, a point-to-point connection 124, etc.
To the user 110, the storage platform system 130 appears as a virtual storage device 134. The virtual storage device 134 may include a pool of storage disks 132 that are managed by a management module as shown in FIG. 2. One function of the management module is to represent information on the disks 132 to the user as at least one virtual disk 134, such as a virtual disk volume.
The management module is connected to the array of disks 132 to control the allocation of data on the physical disks 132. The information on the array 132 is presented to the computer systems of the users 110 as one or more virtual disks 134 and information in the virtual disks 134 is mapped to the array 132. The storage platform system 130 may be expanded via a network connection 140, e.g., IP Network, to a remote storage platform system 150.
FIG. 2 is a simplified block diagram showing the configuration of a distributed network computer storage system 200 according to an embodiment of the present invention. In this embodiment of the invention, storage system 200 is connected by way of a fibre channel Storage Area Network (SAN) 218 to a plurality of SAN clients 220. Each SAN client 220 is a computer, such as what is generally called a personal computer or server computer, and accesses the storage system 200 through a block I/O interface. The storage system 200 includes a plurality of disk array controllers 230 and a plurality of storage devices 240. The disk array controllers 230 may be coupled to communicate with each other via a management network 250. The disk array controllers 230 are also connected to the storage devices 240 of the storage pool 260. The disk array controllers 230 may be connected through a fibre channel.
FIG. 3 illustrates a simplified view 300 of controllers and storage devices according to an embodiment of the present invention. In FIG. 3, a master controller 310 and slave controllers 320, 322 are shown. Each of the controllers 310, 320, 322 includes memory 312, 324, 326. For example, the memory 312, 324, 326 may include non-volatile random access memory. Each of the controllers 310, 320, 322 may access storage 330. Storage 330 includes a file system 340.
To maintain configuration consistency, configuration data is written to memory 312 on the master controller 310 and provided to the file system 340. Each of the remaining controllers 320, 322 may access the configuration data from the file system 340 on at least one shared storage device 330. All controllers 310, 320, 322 are then able to share a single configuration. Any of the slave controllers 320, 322 may read the configuration from the storage device 330, load the configuration data into their memory 324, 326 and use it. Any configuration changes are also performed by the master controller 310, saved to the memory 312 of the master controller 310 and written to the storage device 330 for access by any of the slave controllers 320, 322.
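As a rough illustration of this configuration-sharing step, the following Python sketch models the master publishing its configuration to the shared file system and a slave loading it into its own memory. This is a minimal sketch, not the patented implementation: the patent does not specify an on-disk encoding, so the JSON format, the path names and the function names below are assumptions made only for illustration.

```python
# Illustrative sketch only: the configuration format, path and function names
# are assumptions; the patent does not define an on-disk encoding.
import json
from pathlib import Path

SHARED_FS = Path("/mnt/shared_config")      # hypothetical commonly accessible storage device (330)
CONFIG_FILE = SHARED_FS / "config.json"     # hypothetical configuration file in the file system (340)

def master_write_config(local_memory: dict, config: dict) -> None:
    """Master saves the configuration locally, then publishes it to the shared file system."""
    local_memory["config"] = config                    # save to master memory (312)
    CONFIG_FILE.write_text(json.dumps(config))         # write to the shared storage device (330)

def slave_load_config(local_memory: dict) -> dict:
    """Slave reads the shared configuration and loads it into its own memory."""
    config = json.loads(CONFIG_FILE.read_text())       # read from the file system (340)
    local_memory["config"] = config                    # load into slave memory (324, 326)
    return config
```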
FIG. 4 illustrates a file system 400 for sharing data between controllers according to an embodiment of the present invention. In FIG. 4, a file system 400 for a storage device in a shared pool is shown. This file system 400 may be replicated on each storage device in the shared storage device pool. The file system 400 provides a way to communicate various data from one controller to another in a shared storage device pool. The file system 400 may include a directory file 410, which contains the list of files in the file system 400. The directory file 410 is used to locate files on the file system 400. The file system 400 is expandable and may be replicated on multiple devices to provide redundancy. The file system 400 has a starting logical block address (LBA) 412. The directory file 410 includes an entry 420 for each file name. Each entry includes the file name 422, file start LBA 424 and the file size 426.
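The directory file just described could be represented as a sequence of fixed-size records. The sketch below shows one possible encoding, not the patent's: the 32-byte name field and 64-bit LBA and size fields are assumptions, since FIG. 4 specifies only which fields each entry 420 contains (file name 422, file start LBA 424 and file size 426).

```python
# Illustrative sketch: the field widths are assumptions; the patent defines
# only the fields of a directory entry (name, start LBA, size).
import struct

ENTRY_FORMAT = "<32sQQ"                     # file name, file start LBA, file size
ENTRY_SIZE = struct.calcsize(ENTRY_FORMAT)  # 48 bytes per entry under these assumptions

def pack_entry(name: str, start_lba: int, size: int) -> bytes:
    """Build one fixed-size directory record."""
    return struct.pack(ENTRY_FORMAT, name.encode("ascii"), start_lba, size)

def unpack_entry(record: bytes) -> tuple[str, int, int]:
    """Decode one directory record back into (name, start LBA, size)."""
    raw_name, start_lba, size = struct.unpack(ENTRY_FORMAT, record)
    return raw_name.rstrip(b"\0").decode("ascii"), start_lba, size

def read_directory(directory_bytes: bytes) -> list[tuple[str, int, int]]:
    """Walk the directory file and return the entries used to locate files."""
    entries = []
    for offset in range(0, len(directory_bytes) - ENTRY_SIZE + 1, ENTRY_SIZE):
        entries.append(unpack_entry(directory_bytes[offset:offset + ENTRY_SIZE]))
    return entries
```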
FIG. 5 is a block diagram 500 illustrating the control of multiple controllers from one controller remotely, via synchronous bi-directional communications over a network using a commonly accessible networked storage device, and dedicated input/output and heartbeat files according to an embodiment of the present invention. Data, such as configuration data, commands, instructions and heartbeat files, may be provided in the file system 540 so that any controller 510, 520, 522 may access the data from the network storage device 530. Thus, multiple controllers 510, 520, 522 may exchange commands or instructions. For example, such data may include instructions that may provide programs to be executed or system-level functions to be performed.
A master controller 510 may write 550 a command or instruction to a specific file in the file system 540 for each slave controller 520, 522 on a commonly accessible network storage device 530. Each of the controllers 510, 520, 522 includes memory 512, 524, 526. Slave controllers 520, 522 receiving a command or instruction are set up to periodically read 552 their specific files on the network storage device 530 to retrieve any command or instruction and then execute any retrieved instructions. The slave controllers 520, 522 must interpret the instructions and execute them accordingly.
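The write-then-poll exchange between the master and a slave might look like the following sketch. The file naming convention, the polling interval and the clearing of the command file after execution are assumptions; the patent states only that the master writes to a per-slave file and that each slave periodically reads its own file and executes what it retrieves.

```python
# Illustrative sketch of the per-slave command file and polling loop.
# File names, the polling interval and the command encoding are assumptions.
import time
from pathlib import Path

SHARED_FS = Path("/mnt/shared_config")      # hypothetical shared network storage device (530)

def master_send(slave_name: str, command: str) -> None:
    """Master writes a command to the input file dedicated to one slave."""
    (SHARED_FS / f"{slave_name}.cmd").write_text(command)

def slave_poll(slave_name: str, execute, interval: float = 1.0) -> None:
    """Slave periodically reads its own command file and executes what it finds."""
    cmd_file = SHARED_FS / f"{slave_name}.cmd"
    while True:
        if cmd_file.exists():
            command = cmd_file.read_text().strip()
            if command:
                execute(command)             # interpret and execute the retrieved instruction
                cmd_file.write_text("")      # clear the file once processed (an assumption)
        time.sleep(interval)                 # periodic read (552)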
The master controller 510 needs to obtain feedback from the controllers 520, 522 it is controlling. The slave controllers 520, 522 write to their own dedicated files in the file system 540 on the network storage device 530 where the master controller 510 can then read them. Each slave controller 520, 522 has its own dedicated file in the file system 540.
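The feedback path, together with the heartbeat file mentioned earlier, could be sketched as follows. Again, the file names, the timestamp-based heartbeat and the timeout are illustrative assumptions rather than details taken from the patent, which requires only that each slave writes to its own dedicated file where the master can read it.

```python
# Illustrative sketch of the feedback and heartbeat files; names and the
# timestamp/timeout scheme are assumptions.
import time
from pathlib import Path

SHARED_FS = Path("/mnt/shared_config")      # hypothetical shared network storage device (530)

def slave_report(slave_name: str, status: str) -> None:
    """Slave writes feedback to its dedicated output file and refreshes its heartbeat file."""
    (SHARED_FS / f"{slave_name}.out").write_text(status)
    (SHARED_FS / f"{slave_name}.heartbeat").write_text(str(time.time()))

def master_check(slave_name: str, timeout: float = 10.0) -> tuple[str, bool]:
    """Master reads a slave's output file and uses the heartbeat file to judge whether it is alive."""
    status = (SHARED_FS / f"{slave_name}.out").read_text()
    last_beat = float((SHARED_FS / f"{slave_name}.heartbeat").read_text())
    alive = (time.time() - last_beat) < timeout
    return status, alive
```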
File sharing between the controllers would add a layer of complexity and create its own set of problems, potentially making control risky. However, the method of control according to an embodiment of the present invention does not use file sharing. Each file in the file system 540 is input only or output only. For example, the master controller 510 will only write to the file that the master controller 510 uses to provide instructions for a particular slave 520, 522 to execute, and that slave 520, 522 will read from this file. Timing the reads and writes generally prevents both operations from happening at the same time, although such an overlap would not create any file problems. No two controllers write to the same file at the same time.
Accordingly, the present invention requires only a master program running on a master controller 510, a program for each slave controller 520, 522 and a network storage device 530 accessible by all controllers 510, 520, 522. In addition, the setup for the master 510 and slave 520, 522 controllers is extremely easy, requiring only two pieces of information: a unique controller name for each slave 520, 522 and the full network path to the commonly accessible storage device 530. Moreover, there are no special protocols to load other than those needed for basic network communications, because all communications are basic file operations.
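Reflecting this minimal setup, a slave program might be started with nothing more than those two parameters, as in the hypothetical sketch below; the program name and argument handling are assumptions made only to show how little configuration is involved.

```python
# Hypothetical startup sketch: program name and argument handling are assumptions.
import sys

def main() -> None:
    # Only two pieces of setup information are required: a unique controller
    # name for this slave and the full network path to the shared storage device.
    controller_name, shared_path = sys.argv[1], sys.argv[2]
    print(f"slave '{controller_name}' will poll its files under {shared_path}")
    # In practice these values would feed a polling loop like the one sketched above.

if __name__ == "__main__":
    main()
```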
FIG. 6 illustrates a flow chart 600 for building client information according to an embodiment of the present invention. In FIG. 6, a common directory is requested in which to open a client map 610. A determination is made whether the client map is opened successfully 612. If no 614, a warning is generated 616 and a determination is made whether the map is to be used anyway 618. If not 620, the system returns to the beginning 610. If yes 622, an indication that the client map exists is set 624. A warning may be generated 626. Then, the client number is requested 636.
If the client map is opened successfully 630, the client information in the command directory is set 632. All client maps are read and the client map is assigned an array designation 634. The client number is then requested 636.
After the client number is requested, the client data is either found or not found. If the client data was found 638, a determination is made whether the map is a duplicate 640. If yes 642, a decision is made whether to accept it anyway 644. If no 646, the system loops back to ask for a client number again 636. If yes 648, the duplicate number is set to 1 650. If the client data was not found 652, if the client data is not a duplicate 654, or after the duplicate number is set to 1 650, a decision is made whether the common directory incorrect flag is set 655. If yes 656, a warning is displayed that the common directory cannot be verified as being correct and the client number cannot be verified as being a duplicate 657. If no 658, a decision is made whether the data is confirmed 660. If no 662, the system loops back to begin again 610. If yes 664, the file is written 666. The user may also decide to quit 670, in which case the old values of the client number are reset and logged in to the common directory 672.
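Collapsing the warning and quit branches, the core of the FIG. 6 flow might be approximated by the sketch below. The client-map file name, its JSON encoding and the exact handling of duplicates are assumptions; the flow chart specifies the decisions, not a data format.

```python
# Simplified, illustrative sketch of the FIG. 6 registration flow; file name,
# encoding and error handling are assumptions, and several branches are collapsed.
import json
from pathlib import Path

def build_client_info(common_dir: str, client_name: str, client_number: int) -> bool:
    """Register this controller in the client map kept in the common directory."""
    map_file = Path(common_dir) / "client_map.json"
    try:
        client_map = json.loads(map_file.read_text())      # open the client map (610, 612)
    except (OSError, ValueError):
        print("warning: client map could not be opened")   # warning branch (616)
        client_map = {}
    if client_number in client_map.values():               # duplicate check (640)
        print(f"warning: client number {client_number} is already in use")
        return False                                        # caller may request a new number (636)
    client_map[client_name] = client_number                # set the client information (632)
    map_file.write_text(json.dumps(client_map))            # write the file (666)
    return True
```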
FIG. 7 is a flow chart 700 of the method for communicating between controllers according to an embodiment of the present invention. Data that may include a configuration file, a command or a response is generated 710. A first controller writes this data to at least one storage device that is accessible by the remaining controllers 720. The first controller may also write the data into its memory. At least a second controller accesses the file to obtain the data for processing 730.
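The four steps of FIG. 7 can be summarized in a short end-to-end sketch. The shared file name and the plain-text payload are assumptions used only to make the sequence concrete.

```python
# Illustrative end-to-end sketch of FIG. 7; the file name and plain-text
# payload are assumptions.
from pathlib import Path

SHARED_FILE = Path("/mnt/shared_config/exchange.dat")   # hypothetical shared file

def first_controller_send(local_memory: dict, data: str) -> None:
    """Generate data at a first controller and write it to the shared storage (710, 720)."""
    local_memory["last_sent"] = data        # the first controller may also keep a local copy
    SHARED_FILE.write_text(data)            # write to the commonly accessible storage device

def second_controller_receive(process) -> str:
    """Retrieve the data at a second controller and process it (730)."""
    data = SHARED_FILE.read_text()
    process(data)
    return data

# Example usage (assuming the shared path exists):
# first_controller_send({}, "report status")
# second_controller_receive(print)
```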
Referring to FIG. 2, the method for providing control to a networked storage architecture according to embodiments of the present invention, which is described in detail with reference to FIGS. 3-7, may be tangibly embodied in a computer-readable medium or carrier, e.g. one or more of the fixed and/or removable data storage devices 268 illustrated in FIG. 2, or other data storage or data communications device. The computer program 290 may be loaded into the memory 292 to configure the processor 267 of FIG. 2 for execution. The computer program 290 comprises instructions which, when read and executed by the processor 267 of FIG. 2, cause a controller 230 to perform the steps necessary to execute the steps or elements of the present invention.
The foregoing description of the exemplary embodiment of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the invention be limited not with this detailed description, but rather by the claims appended hereto.

Claims (51)

1. A storage system, comprising:
a data storage device;
a plurality of controllers, each controller having access to data on the data storage device and being adapted to controlling input/output operations of the data storage device;
a networked configuration storage device commonly accessible to each controller, the configuration storage device including a file system;
a first controller, in the plurality of controllers, which writes information that relates to control of the data storage device to the file system; and
a set, consisting of the remaining controllers, the controllers in the set reading the information written by the first controller, and processing the information.
2. The storage system of claim 1, wherein the information comprises a configuration file, the set of controllers being configured by the configuration file to provide a shared configuration among the plurality of controllers.
3. The storage system of claim 2, wherein the set of controllers periodically check the configuration file on the networked configuration storage device for configuration updates.
4. The storage system of claim 1, wherein the information comprises a command, the set of controllers each loading the command from the networked configuration storage device and processing the command.
5. The storage system of claim 1, wherein the information comprises an instruction, the set of controllers each loading the instruction from the networked configuration storage device and performing the retrieved instruction.
6. The storage system of claim 1, wherein the file system includes a directory file for locating files in the file system.
7. The storage system of claim 6, wherein the file system includes a respective heartbeat file corresponding to each controller in the set, the respective heartbeat file being updated by the corresponding controller periodically, thereby allowing the first controller to periodically check the respective heartbeat files on the file system to determine if the corresponding controllers are functioning.
8. The storage system of claim 1, wherein the file system includes a respective heartbeat file corresponding to each controller in the set, the respective heartbeat file being updated by the corresponding controller periodically, thereby allowing the first controller to periodically check the respective heartbeat files on the file system to determine if the corresponding controllers are functioning.
9. The storage system of claim 1, wherein the plurality of controllers include a memory for locally storing the data therein.
10. The storage system of claim 1, wherein the file system includes a directory file that includes an entry for each file in the file system, each entry including a file name, a start address and a file size indicator.
11. The storage system of claim 1, wherein the file system is expandable to allow any number of files in the file system.
12. The storage system of claim 1, wherein the file system includes a file for each of the plurality of controllers, each controller accessing its file to determine whether information has been added to the file.
13. The storage system of claim 12, wherein the information includes a configuration file, a command, a request, or an instruction.
14. The storage system of claim 1, wherein the first controller is a master controller and the set of controllers are slave controllers.
15. The storage system of claim 1, wherein the networked configuration storage device is a physical storage device.
16. The storage system of claim 1, wherein the set contains at least two controllers.
17. The storage system of claim 1, wherein a controller in the set writes information that relates to control of the data storage device to the file system; and another controller in the plurality of controllers reads that information and processes that information.
18. A method for providing control to a networked storage architecture, comprising:
generating information, relating to control of a data storage device, at a first controller;
writing the information to a networked configuration storage device that is commonly accessible by the first controller and a set of at least one other controller;
retrieving the information by each controller in the set; and
processing the retrieved information by each controller in the set, wherein the first controller and each controller in the set has access to data on the data storage device and is adapted to controlling input/output operations of the data storage device.
19. The method of claim 18, wherein the generating information comprises generating a configuration file for providing a shared configuration among the controllers.
20. The method of claim 19 further comprising periodically checking the networked configuration file by the set of controllers for configuration updates.
21. The method of claim 18, wherein the generating information comprises generating a command for processing by the set of controllers.
22. The method of claim 18, wherein the generating information comprises generating an instruction for performance by the set of controllers.
23. The method of claim 18, wherein the writing the information to a networked configuration storage device further includes writing the data to a file system for storing the data from the first controller.
24. The method of claim 23, wherein the writing the information to the file system further includes maintaining a directory file for locating files in the file system.
25. The method of claim 23, wherein the writing the information to the file system further includes providing by each controller in the set a respective heartbeat file that is updated periodically to allow the first controller to periodically verify whether each controller in the set is functioning.
26. The method of claim 18 further comprising writing the information into local memory of the controllers.
27. The method of claim 18, wherein the writing the information to a networked configuration storage device further includes writing the information to a file system and creating a directory file that includes an entry for each file in the file system.
28. The method of claim 27, wherein the creating a directory file includes providing a file name, a start address and a file size indicator for each entry in the directory file.
29. A program storage device readable by a computer, the program storage device tangibly embodying one or more programs of instructions executable by the computer to perform a method for providing control to a networked storage architecture, the method comprising:
generating information, relating to control of a data storage device, at a first controller;
writing the information to a networked configuration storage device that is commonly accessible by the first controller and a set of one or more other controllers;
retrieving the information by the set of controllers; and
processing the retrieved information by the set of controllers, wherein the first controller and each controller in the set has access to data on the data storage device and is adapted to controlling input/output operations of the data storage device.
30. A storage system, comprising:
a) a data storage pool, including a first storage device and a second storage device;
b) a management module, including a first controller and a set of one or more other controllers, the controllers in the management module being adapted to managing storage devices in the data storage pool;
c) a management storage device, commonly accessible by the controllers in the management module and containing data storage pool management information represented in tangible media; and
d) logic, tangibly embodied in instructions stored in digital media, whereby the first controller writes, to the management storage device, management information pertaining to the first storage device, and each controller in the set reads and processes the management information written by the first controller.
31. The storage system of claim 30, further comprising:
e) an interface, adapted to providing a plurality of devices, which are external to the storage system, access for input/output operations to the data storage pool, the input/output operations being under control of the management module.
32. The storage system of claim 30, wherein processing the management information affects configuration of data on the first storage device.
33. The storage system of claim 30, wherein the first storage device is a physical storage device, and the management information relates to presenting, through an interface for access by external devices, a virtual disk that uses storage on the first storage device.
34. The storage system of claim 30, wherein processing the management information affects an input/output operation on the first storage device.
35. The storage system of claim 30, further comprising:
e) a replicate of the data storage pool management information, in tangible media, in a storage device other than the management storage device.
36. The storage system of claim 30, wherein the first storage device is a virtual storage device.
37. The storage system of claim 30, wherein the management storage device is a physical storage device.
38. The storage system of claim 37, further comprising:
e) files in the management storage device dedicated respectively to each controller, whereby the first controller exchanges management information pertaining to the storage pool with controllers in the set; and
f) logic, tangibly embodied in instructions stored in digital media, whereby the controllers exchange management information using the management storage device.
39. The storage system of claim 30, further comprising:
e) logic, tangibly embodied in instructions stored in digital media, whereby a controller in the set writes, to the management storage device, management information pertaining to the second storage device, and the first controller reads and processes the management information written by the controller in the set.
40. A method for coordinating control of a storage system, which includes a data storage pool that contains a first storage device and a second storage device, a management module that contains a first controller and a set of one or more other controllers, and a management storage device that includes data storage pool management information represented in tangible media, the method comprising:
a) writing, to the management storage device by the first controller, management information pertaining to the first storage device;
b) reading, by each controller in the set, the management information written by the first controller; and
c) processing, by each controller in the set, the management information written by the first controller.
41. The method of claim 40, further comprising:
d) receiving by the management module requests from a plurality of devices for input/output operations that access the first storage device.
42. The method of claim 40, wherein the information written by the first controller pertains to the first storage device, and processing by a controller in the set affects the first storage device.
43. The method of claim 40, wherein the processing affects configuration of data on the first storage device.
44. The method of claim 40, wherein the first storage device is a physical storage device, and the processing affects a virtual representation of storage, the virtual representation involving storage on the first storage device.
45. The method of claim 40, wherein the processing affects an input/output operation on the first storage device.
46. The method of claim 40, further comprising:
d) replicating the data storage pool management information, in tangible media, in a storage device other than the management storage device.
47. The method of claim 40, further comprising:
d) mapping, by the management module, storage on the first storage device to a virtual storage device; and
e) making the virtual storage device available for access by devices external to the storage system.
48. The method of claim 47, further comprising:
f) modifying, by the first controller, the mapping of storage on the first storage device, the management information pertaining to the mapping modification.
49. The method of claim 47, further comprising:
f) modifying, by a controller in the set, mapping of storage on the first storage device, the management information pertaining to the mapping.
50. The method of claim 40, wherein the set contains at least two controllers.
51. The method of claim 40, further comprising:
f) writing, to the management storage device by a controller in the set, management information pertaining to the second storage device;
g) reading, by the first controller, the management information written by the controller in the set; and
h) processing, by the first controller, the management information written by the controller in the set.
US10/819,695 2004-04-07 2004-04-07 Method, apparatus and program storage device for providing control to a networked storage architecture Expired - Fee Related US7702757B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/819,695 US7702757B2 (en) 2004-04-07 2004-04-07 Method, apparatus and program storage device for providing control to a networked storage architecture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/819,695 US7702757B2 (en) 2004-04-07 2004-04-07 Method, apparatus and program storage device for providing control to a networked storage architecture

Publications (2)

Publication Number Publication Date
US20050234916A1 US20050234916A1 (en) 2005-10-20
US7702757B2 true US7702757B2 (en) 2010-04-20

Family

ID=35097539

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/819,695 Expired - Fee Related US7702757B2 (en) 2004-04-07 2004-04-07 Method, apparatus and program storage device for providing control to a networked storage architecture

Country Status (1)

Country Link
US (1) US7702757B2 (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050210028A1 (en) * 2004-03-18 2005-09-22 Shoji Kodama Data write protection in a storage area network and network attached storage mixed environment
US8424012B1 (en) * 2004-11-15 2013-04-16 Nvidia Corporation Context switching on a video processor having a scalar execution unit and a vector execution unit
US7542980B2 (en) 2005-04-22 2009-06-02 Sap Ag Methods of comparing and merging business process configurations
US7958486B2 (en) * 2005-04-22 2011-06-07 Sap Ag Methods and systems for data-focused debugging and tracing capabilities
US20060242177A1 (en) * 2005-04-22 2006-10-26 Igor Tsyganskiy Methods of exposing business application runtime exceptions at design time
US8539003B2 (en) * 2005-04-22 2013-09-17 Sap Ag Systems and methods for identifying problems of a business application in a customer support system
US20060242194A1 (en) * 2005-04-22 2006-10-26 Igor Tsyganskiy Systems and methods for modeling and manipulating a table-driven business application in an object-oriented environment
US20060282458A1 (en) * 2005-04-22 2006-12-14 Igor Tsyganskiy Methods and systems for merging business process configurations
US20060241961A1 (en) * 2005-04-22 2006-10-26 Igor Tsyganskiy Methods of optimizing legacy application layer control structure using refactoring
US20060242176A1 (en) * 2005-04-22 2006-10-26 Igor Tsyganskiy Methods of exposing business configuration dependencies
US20060242197A1 (en) * 2005-04-22 2006-10-26 Igor Tsyganskiy Methods of transforming application layer structure as objects
US20060242174A1 (en) * 2005-04-22 2006-10-26 Igor Tsyganskiy Systems and methods for using object-oriented tools to debug business applications
US20060242188A1 (en) * 2005-04-22 2006-10-26 Igor Tsyganskiy Methods of exposing a missing collection of application elements as deprecated
US20060293940A1 (en) * 2005-04-22 2006-12-28 Igor Tsyganskiy Methods and systems for applying intelligent filters and identifying life cycle events for data elements during business application debugging
US7702638B2 (en) * 2005-04-22 2010-04-20 Sap Ag Systems and methods for off-line modeling a business application
US20060242171A1 (en) * 2005-04-22 2006-10-26 Igor Tsyganskiy Methods of using code-based case tools to verify application layer configurations
US7720879B2 (en) * 2005-04-22 2010-05-18 Sap Ag Methods of using an integrated development environment to configure business applications
US20060242172A1 (en) * 2005-04-22 2006-10-26 Igor Tsyganskiy Systems and methods for transforming logic entities of a business application into an object-oriented model
US8838850B2 (en) * 2008-11-17 2014-09-16 Violin Memory, Inc. Cluster control protocol
US20100274886A1 (en) * 2009-04-24 2010-10-28 Nelson Nahum Virtualized data storage in a virtualized server environment

Patent Citations (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5276877A (en) 1990-10-17 1994-01-04 Friedrich Karl S Dynamic computer system performance modeling interface
US5465337A (en) 1992-08-13 1995-11-07 Sun Microsystems, Inc. Method and apparatus for a memory management unit supporting multiple page sizes
EP0706130A1 (en) 1994-10-07 1996-04-10 International Business Machines Corporation Contiguous memory allocation process
US5768623A (en) * 1995-09-19 1998-06-16 International Business Machines Corporation System and method for sharing multiple storage arrays by dedicating adapters as primary controller and secondary controller for arrays reside in different host computers
US6157963A (en) * 1998-03-24 2000-12-05 Lsi Logic Corp. System controller with plurality of memory queues for prioritized scheduling of I/O requests from priority assigned clients
US6061709A (en) 1998-07-31 2000-05-09 Integrated Systems Design Center, Inc. Integrated hardware and software task control executive
US6219753B1 (en) 1999-06-04 2001-04-17 International Business Machines Corporation Fiber channel topological structure and method including structure and method for raid devices and controllers
US6671776B1 (en) * 1999-10-28 2003-12-30 Lsi Logic Corporation Method and system for determining and displaying the topology of a storage array network having multiple hosts and computer readable medium for generating the topology
US6578158B1 (en) 1999-10-28 2003-06-10 International Business Machines Corporation Method and apparatus for providing a raid controller having transparent failover and failback
US6571355B1 (en) 1999-12-29 2003-05-27 Emc Corporation Fibre channel data storage system fail-over mechanism
US6601187B1 (en) 2000-03-31 2003-07-29 Hewlett-Packard Development Company, L. P. System for data replication using redundant pairs of storage controllers, fibre channel fabrics and links therebetween
US6745207B2 (en) 2000-06-02 2004-06-01 Hewlett-Packard Development Company, L.P. System and method for managing virtual storage
US6775230B1 (en) 2000-07-18 2004-08-10 Hitachi, Ltd. Apparatus and method for transmitting frames via a switch in a storage area network
US6952734B1 (en) 2000-08-21 2005-10-04 Hewlett-Packard Development Company, L.P. Method for recovery of paths between storage area network nodes with probationary period and desperation repair
US6732117B1 (en) * 2001-02-27 2004-05-04 Emc Corporation Techniques for handling client-oriented requests within a data storage system
US6944133B2 (en) * 2001-05-01 2005-09-13 Ge Financial Assurance Holdings, Inc. System and method for providing access to resources using a fabric switch
US7216148B2 (en) * 2001-07-27 2007-05-08 Hitachi, Ltd. Storage system having a plurality of controllers
US20030046606A1 (en) 2001-08-30 2003-03-06 International Business Machines Corporation Method for supporting user level online diagnostics on linux
US6892203B2 (en) * 2001-09-07 2005-05-10 Hitachi, Ltd. Method, apparatus and system for remote file sharing
US20030126315A1 (en) 2001-12-28 2003-07-03 Choon-Seng Tan Data storage network with host transparent failover controlled by host bus adapter
US7010528B2 (en) * 2002-05-23 2006-03-07 International Business Machines Corporation Mechanism for running parallel application programs on metadata controller nodes
US20040153863A1 (en) 2002-09-16 2004-08-05 Finisar Corporation Network analysis omniscent loop state machine
US20040148380A1 (en) * 2002-10-28 2004-07-29 Richard Meyer Method and system for dynamic expansion and contraction of nodes in a storage area network
US20050071837A1 (en) 2003-09-29 2005-03-31 International Business Machines Corporation Automated control of a licensed internal code update on a storage controller
US20060242363A1 (en) 2003-09-29 2006-10-26 Keishi Tamura Storage system and storage controller
US7269646B2 (en) * 2003-12-03 2007-09-11 Hitachi, Ltd. Method for coupling storage devices of cluster storage
US7159094B1 (en) 2004-06-30 2007-01-02 Sun Microsystems, Inc. Kernel memory defragmentation method and apparatus
US20060072459A1 (en) 2004-10-05 2006-04-06 Knight Frederick E Advertising port state changes in a network
US20060149913A1 (en) 2004-12-30 2006-07-06 Rothman Michael A Reducing memory fragmentation
US20060146698A1 (en) 2005-01-04 2006-07-06 Emulex Design & Manufacturing Corporation Monitoring detection and removal of malfunctioning devices from an arbitrated loop
US20060174000A1 (en) 2005-01-31 2006-08-03 David Andrew Graves Method and apparatus for automatic verification of a network access control construct for a network switch
US20060236059A1 (en) 2005-04-15 2006-10-19 International Business Machines Corporation System and method of allocating contiguous memory in a data processing system
US20070005820A1 (en) 2005-07-01 2007-01-04 International Business Machines Corporation System, detecting method and program

Non-Patent Citations (20)

* Cited by examiner, † Cited by third party
Title
"Disk Thrashing" Jul. 3, 2003. Retrieved from http://www.webopedia.com/TERM/d/disk-trashing.htm.
"Disk Thrashing" Jul. 3, 2003. Retrieved from http://www.webopedia.com/TERM/d/disk—trashing.htm.
http://en.wikipedia.org/wiki/Storage_Area_Network, 2009.
Office Action, mailed Aug. 26, 2005, U.S. Appl. No. 10/430,487.
Office Action, mailed Feb. 10, 2005, U.S. Appl. No. 10/183,946.
Office Action, mailed Feb. 4, 2005, U.S. Appl. No. 10/183,950.
Office Action, mailed Jan. 21, 2005, U.S. Appl. No. 10/183,967.
Office Action, mailed Jul. 1, 2005, U.S. Appl. No. 10/184,058.
Office Action, mailed Jul. 30, 2009, U.S. Appl. No. 11/731,496.
Office Action, mailed Jun. 27, 2005, U.S. Appl. No. 10/184,059.
Office Action, mailed Mar. 14, 2005, U.S. Appl. No. 10/183,947.
Office Action, mailed Mar. 2, 2006, U.S. Appl. No. 10/434,489.
Office Action, mailed May 2, 2004, U.S. Appl. No. 10/183,979.
Office Action, mailed Nov. 20, 2006, U.S. Appl. No. 10/434,489.
Office Action, mailed Sep. 12, 2005, U.S. Appl. No. 10/183,947.
Office Action, mailed Sep. 16, 2005, U.S. Appl. No. 10/183,950.
Office Action, mailed Sep. 8, 2005, U.S. Appl. No. 10/183,949.
Sicola, Steve. "SCSI-3 Fault Tolerant Controller Configurations utilizing SCC & New Event Codes," High Availability Study Group, Document No. X3T10 95-312r3, Rev 3.0, Feb. 1996, pp. 1-14.

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9407516B2 (en) 2011-01-10 2016-08-02 Storone Ltd. Large scale storage system
US9729666B2 (en) 2011-01-10 2017-08-08 Storone Ltd. Large scale storage system and method of operating thereof
US9448900B2 (en) 2012-06-25 2016-09-20 Storone Ltd. System and method for datacenters disaster recovery
US9697091B2 (en) 2012-06-25 2017-07-04 Storone Ltd. System and method for datacenters disaster recovery
US9612851B2 (en) 2013-03-21 2017-04-04 Storone Ltd. Deploying data-path-related plug-ins
US10169021B2 (en) 2013-03-21 2019-01-01 Storone Ltd. System and method for deploying a data-path-related plug-in for a logical storage entity of a storage system

Also Published As

Publication number Publication date
US20050234916A1 (en) 2005-10-20

Similar Documents

Publication Publication Date Title
US7702757B2 (en) Method, apparatus and program storage device for providing control to a networked storage architecture
US12056025B2 (en) Updating the membership of a pod after detecting a change to a set of storage systems that are synchronously replicating a dataset
US7191357B2 (en) Hybrid quorum/primary-backup fault-tolerance model
US7036039B2 (en) Distributing manager failure-induced workload through the use of a manager-naming scheme
US6880052B2 (en) Storage area network, data replication and storage controller, and method for replicating data using virtualized volumes
US7392425B1 (en) Mirror split brain avoidance
US6947981B2 (en) Flexible data replication mechanism
US7542987B2 (en) Automatic site failover
US8312236B2 (en) Apparatus and program storage device for providing triad copy of storage data
US20050188248A1 (en) Scalable storage architecture
EP3745269B1 (en) Hierarchical fault tolerance in system storage
JP2002229837A (en) Method for controlling access to data in shared disc parallel data file
US20080320051A1 (en) File-sharing system and method of using file-sharing system to generate single logical directory structure
US10572188B2 (en) Server-embedded distributed storage system
US8117493B1 (en) Fast recovery in data mirroring techniques
Vallath Oracle real application clusters
JP2024506524A (en) Publication file system and method
Tate et al. Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum Virtualize V8. 2.1
Bolinches et al. IBM elastic storage server implementation guide for version 5.3
US20060168228A1 (en) System and method for maintaining data integrity in a cluster network
Austin et al. Oracle Clusterware and RAC Administration and Deployment Guide, 10g Release 2 (10.2) B14197-02
Austin et al. Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide, 10g Release 2 (10.2) B14197-10
Austin et al. Oracle® Clusterware and Oracle RAC Administration and Deployment Guide, 10g Release 2 (10.2) B14197-07
Hussain et al. Overview of Oracle RAC: by Kai Yu
Shaw et al. RAC Concepts

Legal Events

Date Code Title Description
AS Assignment

Owner name: XIOTECH CORPORATION, MINNESOTA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BERGMAN, LYLE;EBSEN, DAVE;RYSAVY, RANDY;AND OTHERS;REEL/FRAME:015196/0704;SIGNING DATES FROM 20040330 TO 20040331

Owner name: XIOTECH CORPORATION,MINNESOTA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BERGMAN, LYLE;EBSEN, DAVE;RYSAVY, RANDY;AND OTHERS;SIGNING DATES FROM 20040330 TO 20040331;REEL/FRAME:015196/0704

AS Assignment

Owner name: SILICON VALLEY BANK,CALIFORNIA

Free format text: SECURITY AGREEMENT;ASSIGNOR:XIOTECH CORPORATION;REEL/FRAME:017586/0070

Effective date: 20060222

Owner name: SILICON VALLEY BANK, CALIFORNIA

Free format text: SECURITY AGREEMENT;ASSIGNOR:XIOTECH CORPORATION;REEL/FRAME:017586/0070

Effective date: 20060222

AS Assignment

Owner name: HORIZON TECHNOLOGY FUNDING COMPANY V LLC, CONNECTI

Free format text: SECURITY AGREEMENT;ASSIGNOR:XIOTECH CORPORATION;REEL/FRAME:020061/0847

Effective date: 20071102

Owner name: SILICON VALLEY BANK, CALIFORNIA

Free format text: SECURITY AGREEMENT;ASSIGNOR:XIOTECH CORPORATION;REEL/FRAME:020061/0847

Effective date: 20071102

Owner name: HORIZON TECHNOLOGY FUNDING COMPANY V LLC,CONNECTIC

Free format text: SECURITY AGREEMENT;ASSIGNOR:XIOTECH CORPORATION;REEL/FRAME:020061/0847

Effective date: 20071102

Owner name: SILICON VALLEY BANK,CALIFORNIA

Free format text: SECURITY AGREEMENT;ASSIGNOR:XIOTECH CORPORATION;REEL/FRAME:020061/0847

Effective date: 20071102

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
FEPP Fee payment procedure

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Free format text: PAT HOLDER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: LTOS); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.)

AS Assignment

Owner name: XIOTECH CORPORATION, COLORADO

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:HORIZON TECHNOLOGY FUNDING COMPANY V LLC;REEL/FRAME:044883/0095

Effective date: 20171214

Owner name: XIOTECH CORPORATION, COLORADO

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:SILICON VALLEY BANK;REEL/FRAME:044891/0322

Effective date: 20171214

FEPP Fee payment procedure

Free format text: 7.5 YR SURCHARGE - LATE PMT W/IN 6 MO, SMALL ENTITY (ORIGINAL EVENT CODE: M2555)

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2552)

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20220420