US20040243650A1 - Shared nothing virtual cluster - Google Patents

Shared nothing virtual cluster

Info

Publication number
US20040243650A1
US20040243650A1 (application US10/858,295)
Authority
US
United States
Prior art keywords
virtual
cluster
server
drive
drives
Prior art date
Legal status
Granted
Application number
US10/858,295
Other versions
US7287186B2 (en)
Inventor
Dave McCrory
Robert Hirschfeld
Current Assignee
Quest Software Inc
Original Assignee
Surgient Inc
Priority date
Filing date
Publication date
Application filed by Surgient Inc
Priority to US10/858,295
Assigned to SURGIENT, INC. reassignment SURGIENT, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HIRSCHFELD, ROBERT A., MCCRORY, DAVE D.
Publication of US20040243650A1
Application granted
Publication of US7287186B2
Assigned to SQUARE 1 BANK reassignment SQUARE 1 BANK SECURITY AGREEMENT Assignors: SURGIENT, INC.
Assigned to ESCALATE CAPITAL I, L.P., A DELAWARE LIMITED PARTNERSHIP reassignment ESCALATE CAPITAL I, L.P., A DELAWARE LIMITED PARTNERSHIP INTELLECTUAL PROPERTY SECURITY AGREEMENT TO THAT CERTAIN LOAN AGREEMENT Assignors: SURGIENT, INC., A DELAWARE CORPORATION
Assigned to QUEST SOFTWARE, INC. reassignment QUEST SOFTWARE, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SURGIENT, INC.
Assigned to WELLS FARGO CAPITAL FINANCE, LLC, AS AGENT reassignment WELLS FARGO CAPITAL FINANCE, LLC, AS AGENT AMENDMENT NUMBER SIX TO PATENT SECURITY AGREEMENT Assignors: AELITA SOFTWARE CORPORATION, NETPRO COMPUTING, INC., QUEST SOFTWARE, INC., SCRIPTLOGIC CORPORATION, VIZIONCORE, INC.
Assigned to NETPRO COMPUTING, INC., QUEST SOFTWARE, INC., AELITA SOFTWARE CORPORATION, SCRIPTLOGIC CORPORATION, VIZIONCORE, INC. reassignment NETPRO COMPUTING, INC. RELEASE OF SECURITY INTEREST IN PATENT COLLATERAL Assignors: WELLS FARGO CAPITAL FINANCE, LLC (FORMERLY KNOWN AS WELLS FARGO FOOTHILL, LLC)
Assigned to DELL SOFTWARE INC. reassignment DELL SOFTWARE INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: QUEST SOFTWARE, INC.
Assigned to SURGIENT, INC. reassignment SURGIENT, INC. RELEASE BY SECURED PARTY, EFFECTIVE 08/10/2010 Assignors: SQUARE 1 BANK
Assigned to SURGIENT, INC. reassignment SURGIENT, INC. RELEASE BY SECURED PARTY, EFFECTIVE 08/09/2010 Assignors: ESCALATE CAPITAL I, L.P.
Assigned to BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS FIRST LIEN COLLATERAL AGENT reassignment BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS FIRST LIEN COLLATERAL AGENT PATENT SECURITY AGREEMENT (NOTES) Assignors: APPASSURE SOFTWARE, INC., ASAP SOFTWARE EXPRESS, INC., BOOMI, INC., COMPELLENT TECHNOLOGIES, INC., CREDANT TECHNOLOGIES, INC., DELL INC., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL SOFTWARE INC., DELL USA L.P., FORCE10 NETWORKS, INC., GALE TECHNOLOGIES, INC., PEROT SYSTEMS CORPORATION, SECUREWORKS, INC., WYSE TECHNOLOGY L.L.C.
Assigned to BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT reassignment BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT PATENT SECURITY AGREEMENT (ABL) Assignors: APPASSURE SOFTWARE, INC., ASAP SOFTWARE EXPRESS, INC., BOOMI, INC., COMPELLENT TECHNOLOGIES, INC., CREDANT TECHNOLOGIES, INC., DELL INC., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL SOFTWARE INC., DELL USA L.P., FORCE10 NETWORKS, INC., GALE TECHNOLOGIES, INC., PEROT SYSTEMS CORPORATION, SECUREWORKS, INC., WYSE TECHNOLOGY L.L.C.
Assigned to BANK OF AMERICA, N.A., AS COLLATERAL AGENT reassignment BANK OF AMERICA, N.A., AS COLLATERAL AGENT PATENT SECURITY AGREEMENT (TERM LOAN) Assignors: APPASSURE SOFTWARE, INC., ASAP SOFTWARE EXPRESS, INC., BOOMI, INC., COMPELLENT TECHNOLOGIES, INC., CREDANT TECHNOLOGIES, INC., DELL INC., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL SOFTWARE INC., DELL USA L.P., FORCE10 NETWORKS, INC., GALE TECHNOLOGIES, INC., PEROT SYSTEMS CORPORATION, SECUREWORKS, INC., WYSE TECHNOLOGY L.L.C.
Assigned to DELL INC., SECUREWORKS, INC., ASAP SOFTWARE EXPRESS, INC., COMPELLANT TECHNOLOGIES, INC., DELL PRODUCTS L.P., APPASSURE SOFTWARE, INC., CREDANT TECHNOLOGIES, INC., DELL SOFTWARE INC., DELL USA L.P., PEROT SYSTEMS CORPORATION, WYSE TECHNOLOGY L.L.C., DELL MARKETING L.P., FORCE10 NETWORKS, INC. reassignment DELL INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT reassignment THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT SECURITY AGREEMENT Assignors: AVENTAIL LLC, DELL PRODUCTS L.P., DELL SOFTWARE INC.
Assigned to CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT reassignment CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT SECURITY AGREEMENT Assignors: AVENTAIL LLC, DELL PRODUCTS, L.P., DELL SOFTWARE INC.
Assigned to FORCE10 NETWORKS, INC., SECUREWORKS, INC., DELL INC., DELL SOFTWARE INC., COMPELLENT TECHNOLOGIES, INC., ASAP SOFTWARE EXPRESS, INC., DELL USA L.P., PEROT SYSTEMS CORPORATION, CREDANT TECHNOLOGIES, INC., DELL PRODUCTS L.P., WYSE TECHNOLOGY L.L.C., DELL MARKETING L.P., APPASSURE SOFTWARE, INC. reassignment FORCE10 NETWORKS, INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: BANK OF AMERICA, N.A., AS COLLATERAL AGENT
Assigned to COMPELLENT TECHNOLOGIES, INC., DELL SOFTWARE INC., DELL MARKETING L.P., SECUREWORKS, INC., CREDANT TECHNOLOGIES, INC., ASAP SOFTWARE EXPRESS, INC., WYSE TECHNOLOGY L.L.C., PEROT SYSTEMS CORPORATION, DELL USA L.P., DELL INC., DELL PRODUCTS L.P., APPASSURE SOFTWARE, INC., FORCE10 NETWORKS, INC. reassignment COMPELLENT TECHNOLOGIES, INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT
Assigned to DELL PRODUCTS L.P., AVENTAIL LLC, DELL SOFTWARE INC. reassignment DELL PRODUCTS L.P. RELEASE OF SECURITY INTEREST IN CERTAIN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040039/0642) Assignors: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A.
Assigned to AVENTAIL LLC, DELL SOFTWARE INC., DELL PRODUCTS, L.P. reassignment AVENTAIL LLC RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH
Assigned to QUEST SOFTWARE INC. reassignment QUEST SOFTWARE INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: DELL SOFTWARE INC.
Assigned to CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT reassignment CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT FIRST LIEN PATENT SECURITY AGREEMENT Assignors: QUEST SOFTWARE INC.
Assigned to CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT reassignment CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT SECOND LIEN PATENT SECURITY AGREEMENT Assignors: QUEST SOFTWARE INC.
Assigned to GOLDMAN SACHS BANK USA reassignment GOLDMAN SACHS BANK USA FIRST LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT Assignors: ANALYTIX DATA SERVICES INC., BINARYTREE.COM LLC, erwin, Inc., One Identity LLC, ONE IDENTITY SOFTWARE INTERNATIONAL DESIGNATED ACTIVITY COMPANY, OneLogin, Inc., QUEST SOFTWARE INC.
Assigned to MORGAN STANLEY SENIOR FUNDING, INC. reassignment MORGAN STANLEY SENIOR FUNDING, INC. SECOND LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT Assignors: ANALYTIX DATA SERVICES INC., BINARYTREE.COM LLC, erwin, Inc., One Identity LLC, ONE IDENTITY SOFTWARE INTERNATIONAL DESIGNATED ACTIVITY COMPANY, OneLogin, Inc., QUEST SOFTWARE INC.
Assigned to QUEST SOFTWARE INC. reassignment QUEST SOFTWARE INC. RELEASE OF SECOND LIEN SECURITY INTEREST IN PATENTS Assignors: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT
Assigned to QUEST SOFTWARE INC. reassignment QUEST SOFTWARE INC. RELEASE OF FIRST LIEN SECURITY INTEREST IN PATENTS Assignors: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT
Legal status: Active (adjusted expiration)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/07: Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16: Error detection or correction of the data by redundancy in hardware
    • G06F 11/20: ... using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2053: ... where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F 11/2056: ... redundant by mirroring
    • G06F 11/2069: Management of state, configuration or failover
    • G06F 11/202: ... where processing functionality is redundant
    • G06F 11/2023: Failover techniques
    • G06F 11/2033: Failover techniques switching over of hardware resources
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: ... using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061: Partitioning or combining of resources

Definitions

  • the present invention relates to clustering and virtualization technology, and more particularly to a shared-nothing virtual cluster formation that eliminates shared hardware.
  • a “physical” device is a material resource, such as, for example, a server, a network switch, memory devices, a disk drive, etc. Even though physical devices are discrete resources, they are not inherently unique. For example, random access memory (RAM) devices and a central processing unit (CPU) in a physical server may be interchangeable between like physical devices. Also, network switches may be easily exchanged with minimal impact.
  • a “logical” device is a representation of a physical device to make it unique and distinct from other physical devices. For example, every network interface has a unique media access control (MAC) address. A MAC address is the logical unique identifier of a physical network interface card (NIC).
  • a “traditional” device is a combined logical and physical device in which the logical device provides the entire identity of a physical device.
  • a physical NIC has its MAC address permanently affixed so the physical device is inextricably tied to the logical device.
  • a “virtualized” device breaks the traditional interdependence between physical and logical devices.
  • Virtualization allows logical devices to exist as an abstraction without being directly tied to a specific physical device.
  • Simple virtualization can be achieved using logical names instead of physical identifiers. For example, using an Internet Uniform Resource Locator (URL) instead of a server's MAC address for network identification effectively virtualizes the target server.
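  • As a small illustration of that idea (not part of the patent disclosure), the Python sketch below resolves a logical host name to whatever address currently backs it, so clients never depend on a hardware identifier such as a MAC address.

```python
import socket

def resolve_logical_server(logical_name: str) -> str:
    """Return the network address currently backing a logical server name.

    Clients bind to the stable logical name; the physical machine behind the
    name can be replaced without changing any client configuration.
    """
    return socket.gethostbyname(logical_name)

if __name__ == "__main__":
    # The name-to-address mapping, not the hardware, defines the server identity.
    print(resolve_logical_server("localhost"))
```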
  • Complex virtualization separates physical device dependencies from the logical device. For example, a virtualized NIC could have an assigned MAC address that exists independently of the physical resources managing the NIC network traffic.
  • a cluster includes at least two computers or servers communicatively coupled together and to a shared hardware disk drive that stores shared data.
  • Clustering may be implemented as Active/Active (A/A) or Active/Passive (A/P).
  • the servers are active and operating to handle respective loads.
  • the shared data (called a “quorum”) stored on the shared drive is constantly shared between the two servers.
  • In the event of failure of either server in a two-server cluster for the A/A configuration, the other server automatically resumes responsibility for the entire load previously shared by both.
  • one of the servers is the “active” node whereas the other server is the “passive” node.
  • the active node handles the entire cluster load while the passive node remains in a standby state or the like.
  • a “failover” occurs in which the passive server is switched or otherwise “promoted” to active mode to resume handling the cluster load. In this manner, the passive server operates as a fail-safe mechanism for the active server.
  • a link is provided between the clustered servers to detect failure and to facilitate a failover event.
  • the link may perform several functions, such as monitoring the status of the active server, detecting failure of the active server and providing failure notification.
  • the link may further operate to coordinate failover, if necessary.
  • the active server is “demoted” to passive status and the passive server is promoted to active status to effectuate the failover.
  • a traditional virtual cluster employs a virtualization layer on top of physical devices.
  • the virtual cluster is similar to the physical cluster in that it includes two computers or servers communicatively coupled together and a link that detects a failover condition.
  • a first virtual server is implemented on a first underlying physical server and a second virtual server is implemented on a second physical server.
  • a communication link is provided between the virtual servers and with a shared virtual drive implemented on a third physical device.
  • the shared virtual drive stores shared data for the virtual servers of the virtual cluster.
  • Operation of the virtual cluster is similar to its physical counterpart.
  • In the event of failure of the first physical server, the passive virtual server is promoted to active mode to resume handling of the cluster load.
  • a drive image or the like describing or otherwise defining characteristics of the virtual server is stored on the virtual drive, and is employed to create or otherwise power up the virtual server during the failover event.
  • the traditional cluster configurations have several disadvantages. Both servers require access to the shared drive, which results in an expensive and complicated structure.
  • the traditional clusters have physical limitations in that the physical separation between the physical servers and physical devices storing the shared data is a limiting factor. Also, the additional shared hardware for the shared drive results in an additional point of failure.
  • the shared data configuration is expensive and requires special software and configuration.
  • the operating system (OS) of both servers must support clustering.
  • Application software may need to support clustering.
  • the virtual configuration the virtualization software must support clustering or otherwise be configured to handle the clustered structure.
  • the shared drive (physical or virtual) must be pre-configured to support clustering.
  • the clustering portions, including the servers and drives, are not easily replaced or substituted.
  • a shared-nothing virtual cluster includes a plurality of virtual servers located on a corresponding plurality of physical servers linked together via a network, where the virtual servers collectively form an active/passive (A/P) cluster including an active virtual server and at least one passive server.
  • the shared-nothing virtual cluster further includes an interlink and a plurality of virtual drives located on the physical servers.
  • the active virtual server handles a cluster load and executes a first operating system (OS) that operates the virtual drives in a data redundant configuration that collectively stores a data set for the cluster.
  • Each passive virtual server is coupled to a sufficient number of the virtual drives with redundant information to recover the data set for the cluster.
  • the interlink is operatively configured to detect failure of the active virtual server and to initiate promotion of a passive virtual server to active status to resume handling the cluster load after failover.
  • the first OS maintains first and second virtual drives in a mirrored configuration.
  • the virtual drives may include at least three drives in the data redundant configuration, such as, for example, a RAID level 5 configuration.
  • the passive virtual server may be an inactive instance of the active virtual server and include a second OS that is configured, when the passive virtual server is activated, to operate the second virtual drive as its primary drive storing the data set.
  • the inactive instance may further be configured to use a replacement drive to complete the data redundant configuration, such as to replace a virtual drive located on a failed physical server.
  • each virtual drive may include a virtual static drive and one or more virtual differential drives.
  • the network may be an inter-network so that the physical servers may be physically located at geographically remote locations.
  • the interlink includes a status monitor, a load monitor and a cluster manager.
  • the status monitor detects failure of the active virtual server.
  • the load monitor monitors relative load level of each of the physical servers.
  • the cluster manager configures and maintains the cluster, manages failover, and selects a passive server for promotion during failover.
  • the cluster manager may further be configured to monitor load information from the load monitor to ensure adequate resources before and after the failover.
  • a virtual cluster includes a first virtual server located on a first of a plurality of physical servers, where the first virtual server is initially active and handling a cluster load.
  • Cluster data is stored in a plurality of virtual drives organized in a data redundant configuration.
  • the virtual cluster includes a disk image stored on the virtual drives and an interlink.
  • the disk image incorporates attributes of the first virtual server.
  • the interlink is operative to monitor the first virtual server and to initiate promotion of a second virtual server on a second physical server to active status using the disk image in the event of failure of the first virtual server to effectuate failover.
  • the second virtual server when activated, resumes handling of the cluster load and accesses the cluster data.
  • the virtual drives may be configured in any suitable data redundant configuration, such as a mirrored configuration or a configuration including three or more drives. Other alternative embodiments are contemplated, such as similar to those previously described.
  • a method of configuring and operating a shared-nothing virtual cluster includes operating an active virtual server on a first one of a plurality of physical servers coupled together via a network to handle a cluster load, storing, by the active virtual server, cluster data onto a plurality of virtual drives located on the physical servers and organized in a data redundant configuration, detecting failure of the first physical server, and in the event of failure of the first physical server, activating a second virtual server on a second physical server to resume handling of the cluster load, and providing access by the activated second virtual server to a sufficient number of the virtual drives collectively storing the cluster data.
  • the activating of a second virtual server may include activating an inactive instance of the active virtual server.
  • the method may include storing a disk image including attributes of the active virtual server, and retrieving and using the disk image to activate the second virtual server.
  • the method may include monitoring relative load of each of the physical servers and providing load information.
  • the method may include selecting the first physical server to initially handle the cluster load based on the load information, and selecting the second physical server to resume handling of the cluster load based on the load information in the event of failure of the first physical server.
  • Other alternative embodiments are contemplated, such as similar to those previously described.
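  • For orientation, the overall method can be pictured as the small Python sketch below; the classes and selection rule are hypothetical stand-ins (not the patent's implementation), and a real deployment would drive actual virtualization software rather than in-memory objects.

```python
from dataclasses import dataclass

@dataclass
class PhysicalServer:
    name: str
    load: float          # relative load level as reported by a load monitor
    healthy: bool = True

@dataclass
class VirtualServer:
    host: PhysicalServer
    active: bool = False

def select_least_loaded(servers):
    """Pick the healthy physical server with the lowest relative load."""
    return min((s for s in servers if s.healthy), key=lambda s: s.load)

def run_cluster(physicals):
    # 1. Place the active virtual server on the least-loaded physical server.
    active = VirtualServer(select_least_loaded(physicals), active=True)

    # 2. Cluster data would be written redundantly across virtual drives on the
    #    physical servers (see the mirrored and parity sketches further below).

    # 3. On failure of the active host, promote a passive instance elsewhere.
    active.host.healthy = False                      # simulated failure
    passive = VirtualServer(select_least_loaded(physicals))
    active.active, passive.active = False, True      # failover
    return passive

if __name__ == "__main__":
    hosts = [PhysicalServer("ps1", load=0.2), PhysicalServer("ps2", load=0.5)]
    print("cluster load now handled on", run_cluster(hosts).host.name)   # ps2
```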
  • FIG. 1 is a simplified block diagram of a traditional cluster employing physical devices
  • FIG. 2 is a simplified block diagram of a traditional virtual cluster that employs a virtualization layer on top of physical devices
  • FIG. 3 is a block diagram of a shared-nothing A/P cluster implemented according to an exemplary embodiment of the present invention
  • FIG. 4 is a block diagram of a shared-nothing cluster implemented according to another exemplary embodiment of the present invention employing a RAID level 5 configuration
  • FIG. 5 is a block diagram of a cluster similar to the cluster of FIG. 3 except including a differential drive for each virtual drive;
  • FIG. 6 is a block diagram of a cluster similar to the cluster of FIG. 3 except communicatively coupled via an inter-network;
  • FIG. 7 is a block diagram of a cluster similar to the cluster of FIG. 3 except extended to include additional physical servers to form a cluster ring or cluster chain;
  • FIG. 8 is a simplified block diagram illustrating an exemplary embodiment of an interlink configured as a service and with expanded capabilities.
  • FIG. 1 is a simplified block diagram of a traditional cluster 100 employing physical devices.
  • the cluster 100 includes two computers or servers 102 and 104 coupled together via a network 106 , which is further coupled to a shared hardware disk drive 108 that stores shared data.
  • the shared data stored on the shared drive 108 is constantly shared between the two servers.
  • the shared physical drive 108 may be implemented as a Storage Area Network (SAN) or the like.
  • Although clustering may be implemented as Active/Active (A/A) or Active/Passive (A/P), the present invention primarily concerns the A/P configuration.
  • One of the servers, such as the server 102, is the active node whereas the other server 104 is the passive node.
  • the active node handles the entire cluster load while the passive node remains in a standby state or the like.
  • a “failover” occurs in which the passive server 104 is switched or otherwise “promoted” to active mode to resume handling the cluster load. In this manner, the server 104 operates as a fail-safe mechanism for the server 102 .
  • An interlink 110 is provided between the servers 102 , 104 to detect failure and to facilitate a failover event.
  • the interlink 110 monitors the status of the active server 102 , detects failure of the active server 102 and provides failure notification.
  • the interlink 110 may further operate to coordinate failover, if necessary. In the event of failure, the active server 102 is “demoted” to passive status and the passive server 104 is promoted to active status to effectuate the failover.
  • FIG. 2 is a simplified block diagram of a traditional virtual cluster 200 , that employs a virtualization layer on top of physical devices.
  • the virtual cluster 200 is similar to the physical cluster 100 in that it includes two computers or servers 202 and 204 coupled via a network 206 and an interlink 210 that detects a failover condition for the servers 202 and 204 .
  • a virtual server 212 is implemented on the underlying physical server 202 and a virtual server 214 is implemented on the physical server 204 .
  • Virtual extensions of the network 206 establish a communication link between the virtual servers 212 and 214 and with a shared virtual drive 208 implemented on another physical device 216 .
  • the shared virtual drive 208 stores shared data for the virtual servers 212 and 214 .
  • Operation of the cluster 200 is similar to the cluster 100 .
  • the virtual server 212 is the active node
  • the passive virtual server 214 is promoted to active mode to resume handling of the cluster load.
  • a drive image or the like describing or otherwise defining characteristics of the virtual server 212 is stored on the virtual drive 208 , and is employed to create or otherwise power up the virtual server 214 during the failover event.
  • the traditional cluster configurations as exemplified by the clusters 100 and 200 have several disadvantages. Both of the servers ( 102 , 104 or 212 , 214 ) require access to the shared drive ( 108 or 208 ), which results in an expensive and complicated structure.
  • the traditional clusters 100 and 200 have physical limitations in that the physical separation between the physical servers and physical devices storing the shared data is a limiting factor. Also, the additional shared hardware, such as the shared physical drive 108 or the physical device 216 , results in an additional point of failure.
  • the shared data configuration is expensive and requires special software and configuration.
  • the operating system (OS) of both servers must support clustering. Application software may need to support clustering.
  • the virtualization software must support clustering or otherwise be configured to handle the clustered structure.
  • the shared drive (physical 108 and virtual 208 ) must be pre-configured to support clustering.
  • the clustering portions, including the servers and drives, are not easily replaced or substituted.
  • FIG. 3 is a block diagram of a shared-nothing A/P cluster 300 implemented according to an embodiment of the present invention.
  • a virtual server 312 is located on a first physical server 302 and serves as the primary or active server in the A/P cluster configuration for handling the entire cluster load.
  • Another virtual server 314 is located on a second physical server 304 and serves as the secondary or passive server in the A/P cluster configuration.
  • the virtual servers 312 and 314 are communicatively coupled or linked together via a network 306 .
  • the virtual server 314 is passive and initially in a standby or powered-down mode.
  • the virtual server 314 is an inactive instance of the virtual server 312, as further described below.
  • An interlink 310 is provided between the physical servers 302 and 304 or otherwise between the virtual servers 312 and 314 .
  • the interlink 310 at least operates to detect and provide notice of failure of the virtual server 312 and/or the physical server 302 .
  • a virtual drive 316 is implemented on the physical server 302 and coupled to the active virtual server 312 during normal operation via a local link 315 .
  • a network link 324 is provided between the virtual server 312 and another virtual drive 318 implemented on the physical server 304 .
  • the network links to the virtual drives described herein, including the network link 324, are shown with dashed lines and are implemented across corresponding networks, such as the network 306.
  • the virtual server 312 executes an operating system (OS) 320 that is configured to use the virtual drive 316 as its primary drive and to operate the virtual drive 318 via the link 324 as a mirrored drive in a mirrored data redundant configuration, such as according to RAID level 1.
  • the OS 320 maintains a copy of data on both of the virtual drives 316 and 318 .
  • the virtual server 314 includes an OS 322 , which is a substantially identical copy of the OS 320 while the virtual server 312 is active.
  • the persistent attributes of the two virtual servers 312 , 314 are the same so that they effectively have the same identity. Examples of persistent attributes include a media access control (MAC) address, a boot disk image, a system identifier, a processor type and access credentials.
  • the persistent attributes may further include semi-persistent attributes including an internet protocol (IP) address, a logical name, a server cloud manager identifier, user account information, a non-boot disk image and network connections.
  • Examples of non-persistent attributes include processor resource information, memory resource information, keyboard/video/mouse (KVM) resources and disk redundancy level.
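  • Purely as an illustration, these attribute groups might be captured in a record such as the sketch below; the field names follow the examples above, while the exact grouping and types are assumptions rather than a data structure defined by the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VirtualServerIdentity:
    # Persistent attributes: identical for the active and passive instances, so
    # both present the same identity to the rest of the network.
    mac_address: str
    boot_disk_image: str
    system_identifier: str
    processor_type: str
    access_credentials: Dict[str, str]

    # Semi-persistent attributes: normally carried over when a passive copy is
    # activated (IP address, logical name, user accounts and the like).
    ip_address: str = ""
    logical_name: str = ""
    user_accounts: List[str] = field(default_factory=list)

    # Attributes supplied or adjusted at activation time, such as resource
    # allocations and the disk redundancy level.
    memory_mb: int = 0
    kvm_resources: str = ""
    disk_redundancy_level: int = 1
```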
  • the passive virtual server 314 is activated on physical server 304 to resume handling of the cluster load. Since the persistent attributes of the virtual server 314 are the same as the virtual server 312, it is effectively identified as the same server, which resumes in place of the failed virtual server 312.
  • the OS 322 when activated, sees and uses the virtual drive 318 as its primary drive via local link 317 rather than the virtual drive 316 .
  • the virtual server 314 may be powered up using a drive image stored on the virtual drives 316 and 318 .
  • the OS 322 operates the virtual drives 318 and 316 in a mirrored configuration, similar to that of the OS 320 , except that the virtual drive 318 is its primary drive whereas the virtual drive 316 is the mirrored drive coupled via another network link 326 .
  • the OS 322 of the newly started virtual server 314 operates the virtual drives 318 and 316 in a similar yet reversed mirrored configuration.
  • the links 324 and 326 form a cross-linked connection between the virtual servers 312 , 314 and the virtual drives 316 , 318 .
  • the configuration change regarding the reversed use of the virtual drives 316 and 318 is made at the virtual software level prior to failover and transparent to the operating systems 320 and 322 .
  • Both of the operating systems 320 and 322 are configured to use the same redundant drive set (e.g., same RAID set) in which “drive 1” is the local drive and “drive 2” is the remote drive.
  • the OS 320 when active, sees the virtual drives 316 and 318 as redundant drive set in which drive 1 is the virtual drive 316 and drive 2 is the virtual drive 318 .
  • the identity and operation of the OS 322 is substantially the same as the OS 320 in which it sees the same redundant drive set including the virtual drives 316 and 318 , except that, since the OS 322 is activated on a different physical server, it sees drive 1 as the virtual drive 318 and drive 2 as the virtual drive 316 .
  • When a passive server is activated, it does not see a serious drive failure situation since its primary drive is operable. This is true even if the mirrored drive 316 is temporarily unavailable. This configuration change improves the recovery process since the activated passive server can automatically resume operations as though a failure had not occurred.
  • the links 324 and 326 may be effectively broken upon start up of the virtual server 314 , such as in the event that the physical server 302 was the cause of the failure where the virtual drive 316 is no longer available. Nonetheless, no data is lost and the virtual server 314 may operate uninterrupted while the virtual drive 316 is unavailable or otherwise while the link 326 is broken.
  • the OS 322 of the virtual server 314 automatically re-synchronizes the data on the mirrored virtual drive 316 with the data on the virtual drive 318 . Operation may proceed in this reversed manner in which the virtual server 314 continues as the active server while the virtual server 312 remains as the passive server in standby and/or powered down mode.
  • the active/passive status of the virtual servers 312 , 314 may be swapped again to return to the original configuration if desired.
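  • A minimal sketch of this arrangement follows, assuming a toy block store in place of real virtual drives; it models each operating system's view of the same two-drive set ("drive 1" local, "drive 2" remote), the reversed roles after failover, and the later resynchronization.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class VirtualDrive:
    name: str
    blocks: Dict[int, bytes] = field(default_factory=dict)
    available: bool = True

class MirroredSet:
    """Two-drive mirror as seen by one node: drive 1 is always the local drive."""

    def __init__(self, local: VirtualDrive, remote: VirtualDrive):
        self.local, self.remote = local, remote

    def write(self, block: int, data: bytes) -> None:
        self.local.blocks[block] = data
        if self.remote.available:            # tolerate a missing mirror
            self.remote.blocks[block] = data

    def read(self, block: int) -> Optional[bytes]:
        return self.local.blocks.get(block)

    def resynchronize(self) -> None:
        """Copy the surviving local data back onto the mirror once it returns."""
        self.remote.available = True
        self.remote.blocks = dict(self.local.blocks)

if __name__ == "__main__":
    # Drive 316 is local to the active node, drive 318 local to the passive node.
    d316, d318 = VirtualDrive("316"), VirtualDrive("318")
    active_view = MirroredSet(local=d316, remote=d318)    # OS 320's view
    active_view.write(0, b"cluster data")

    # Failover: drive 316 becomes unavailable and the passive node takes over
    # with the roles reversed, so its primary (local) drive is already populated.
    d316.available = False
    passive_view = MirroredSet(local=d318, remote=d316)   # OS 322's view
    assert passive_view.read(0) == b"cluster data"
    passive_view.resynchronize()                          # once 316 is back online
```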
  • the OS 322 does not support any RAID level or mirroring and the link 326 is removed or otherwise not provided.
  • the virtual server 314 is activated and configured to operate using its local virtual drive 318 as its primary drive without loss of cluster data.
  • the virtual server 314 is suspended or otherwise shut down, and the virtual drive 316 is synchronized with the virtual drive 318 .
  • the virtual server 312 is restarted using the virtual drive 316 as its primary and the virtual drive 318 as its secondary, mirrored drive.
  • a shared drive is not required for the shared-nothing cluster 300 .
  • The data redundant capabilities of the OS 320 (e.g., RAID or the like) are employed instead, so that the synchronization of the virtual drives 316 and 318 is transparent and ongoing and fully enables successful failover without loss of data.
  • the original configuration (or otherwise, the equivalent reversed configuration) is quickly re-established in a seamless manner since the data redundant disk operation capabilities automatically synchronize and reconstruct one drive to match another.
  • a significant advantage is that the virtualization software, the operating systems 320 and 322 , and the application software provided on both of the virtual servers 312 , 314 , are not required to support clustering or even be aware that a clustered configuration exists.
  • the virtual drives 316 and 318 do not have to be configured to support clustering or to share data between two servers.
  • Generic application, OS and virtualization software may be employed as long as the OS supports some form of data redundant configuration, such as, for example, RAID level 1 or any other suitable RAID level.
  • Embodiments of the present invention are illustrated using RAID level 1 and 5 configurations, although it is understood that other RAID levels or other data redundant methods now known or newly developed may be used without departing from the spirit and scope of the present invention.
  • the data is automatically maintained by the data redundancy operations, such as the standard RAID disk operation capabilities. Data redundancy across multiple drives ensures the integrity of the data even while a drive is missing or unavailable (e.g., data maintained on one drive while mirrored drive is missing).
  • the cluster 300 may be optimized by optimizing the network 306 to limit traffic with a special network link or the like between the physical servers 302 , 304 .
  • the network 306 is configured as a dedicated network.
  • the network 306 is implemented as a physical crossover.
  • the mirror network link 326 is optional and not required since the virtual server 314 may resume operation after failover using the virtual drive 318 without the virtual drive 316 .
  • the link 326 may be established at a later time or even by a different physical server (not shown).
  • the original physical server 302 after failover, is not required to rebuild the cluster 300 .
  • the OS 322 synchronizes the data between the virtual drive 318 and another similar virtual drive on another physical server.
  • the virtual server 314 is suspended and the OS associated with another virtual server on another physical server synchronizes the data between a different virtual drive and the virtual drive 318 .
  • the other virtual server continues operating as the active node whereas the virtual server 314 once again becomes the passive node.
  • the interlink 310 may be implemented in many different ways varying from simple to complex.
  • the interlink 310 does not need to be implemented with network communication and may be implemented solely on the backup or passive node and provide relatively simple notifications.
  • the interlink 310 may comprise a simple monitor function on the physical server 304 for monitoring the mirrored virtual drive 318 (e.g., file lock) to make the failover decision.
  • the interlink 310 may be more complex and be designed to monitor one or more heartbeat signals from either or both of the physical server 302 and/or the virtual server 312 .
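  • Either approach reduces to a small polling loop; the sketch below is illustrative only, with hypothetical file paths and timeout values, checking a heartbeat timestamp and a lock file on the mirrored drive and invoking a failover callback when they go stale.

```python
import os
import time

HEARTBEAT_FILE = "/mnt/mirror/heartbeat"   # hypothetical path on the mirrored virtual drive
LOCK_FILE = "/mnt/mirror/active.lock"      # hypothetical lock file held by the active node
TIMEOUT_SECONDS = 15.0

def active_node_alive() -> bool:
    """Return True while the active node appears healthy.

    The active node periodically touches the heartbeat file and holds the lock
    file; a stale heartbeat or missing lock indicates that failover is needed.
    """
    try:
        heartbeat_age = time.time() - os.path.getmtime(HEARTBEAT_FILE)
    except OSError:
        return False                       # drive or file unreachable
    return heartbeat_age < TIMEOUT_SECONDS and os.path.exists(LOCK_FILE)

def monitor(poll_interval: float = 5.0, on_failure=lambda: None) -> None:
    """Poll until failure is detected, then invoke the failover callback once."""
    while active_node_alive():
        time.sleep(poll_interval)
    on_failure()
```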
  • the interlink 310 is a service function that detects failure of the active node and coordinates the failover process by promoting the passive node to active and maintaining seamless handling of the load.
  • the interlink 310 incorporates management functions that provide load monitoring, load balancing and cluster operation and configuration.
  • the interlink 310 incorporates management functionality to setup, monitor and maintain one or more clusters.
  • the interlink 310 may be designed with any level of intelligence to properly build a cluster configuration, to optimize cluster load and operation over time in response to available resources and dynamic load levels, and to re-build in response to node failure and during and after restoration.
  • the interlink 310 performs vital pre-failure roles, such as helping to build the initial cluster configuration correctly and to periodically or continuously monitor loads on passive systems to ensure adequate resources in the event of failure of an active system.
  • the interlink 310 is capable of redistributing cluster elements in a seamless manner to optimize load handling during normal operation (e.g., prior to a failover event) and during and after any failover event(s).
  • the interlink 310 also manages restoration of operation to the original or main system when the failed main node is back online.
  • the interlink 310 ensures appropriate cluster operation, such as by enforcing that only one node (and corresponding virtual server) is active at a time in the A/P cluster configuration.
  • the interlink 310 demotes the failed node in response to a failover event and selects a passive node for promotion to active based on any combination of static considerations (e.g., predetermined passive activation order) and dynamic considerations, such as existing or anticipated loads.
  • each passive server can be implemented as a substantial duplicate of the active server.
  • This is achieved by storing attributes (e.g., persistent attributes) of the active virtual server in a file image that is stored on the virtual drives.
  • the stored drive image is used to create or otherwise power up a passive virtual server to continue handling the cluster load previously being handled by the failed active node.
  • Non-persistent attributes are provided and the local drive is configured to be the primary drive for the activated virtual server.
  • the drive image can be stored on another storage or memory and can be used to create and activate the passive virtual server on the same or even a different physical server.
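  • As a rough illustration, such a drive image can be treated as a small descriptor stored with the cluster data; the sketch below (with hypothetical attribute names and file locations) serializes the active server's attributes and later reads them back to bring up a replacement instance with its local drive as primary.

```python
import json
from pathlib import Path

def store_server_image(image_path: Path, attributes: dict) -> None:
    """Persist the active virtual server's attributes alongside the cluster data."""
    image_path.write_text(json.dumps(attributes, indent=2))

def activate_from_image(image_path: Path, local_primary_drive: str) -> dict:
    """Recreate a passive instance: same persistent identity, local primary drive."""
    attributes = json.loads(image_path.read_text())
    attributes["primary_drive"] = local_primary_drive   # supplied at activation time
    return attributes

if __name__ == "__main__":
    # Hypothetical usage: because the image sits on the redundant drive set, any
    # surviving node can read it back and activate an identical server.
    image = Path("server_image.json")
    store_server_image(image, {"mac_address": "02:00:00:aa:bb:cc",
                               "logical_name": "cluster-node"})
    replacement = activate_from_image(image, local_primary_drive="virtual_drive_318")
    print(replacement["logical_name"], "primary:", replacement["primary_drive"])
```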
  • FIG. 4 is a block diagram of a shared-nothing cluster 400 implemented according to another exemplary embodiment of the present invention employing a RAID level 5 configuration.
  • Three physical servers 402 , 404 and 406 are communicatively coupled together via a network 408 .
  • Virtual servers 412 , 414 and 416 are operated on the physical servers 402 , 404 and 406 , respectively.
  • the virtual server 412 is initially the active server handling the entire cluster load whereas the virtual servers 414 and 416 are the passive servers in standby mode or otherwise powered down.
  • both of the virtual servers 414 and 416 are inactive instances of the virtual server 412 .
  • An interlink 410 is provided between the virtual server 412 and the virtual servers 414, 416 to detect failure of the active virtual server 412 and to activate one of the passive virtual servers 414 or 416.
  • the cluster 400 includes virtual drives 418 , 420 and 422 located on physical servers 402 , 404 and 406 , respectively, and configured to be operated according to a data redundant configuration including three or more drives.
  • An exemplary data redundant configuration is the RAID level 5 configuration.
  • the virtual drives 418 - 422 collectively store the entire cluster data or data set.
  • the active virtual server 412 executes an OS 424 that communicates with the virtual drives 418 , 420 and 422 via links 417 , 423 and 425 , respectively, wherein link 417 is a local link and links 423 and 425 are network links.
  • the virtual servers 414 and 416 include respective OSs 426 and 428 , which are substantially identical to the OS 424 .
  • In the selected data redundant configuration (e.g., RAID level 5), redundancy of data is employed so that any one of the virtual drives 418-422 may be removed without loss of data.
  • the information on the missing drive is reconstructed using the combined information of the remaining drives as known to persons having ordinary skill in the art. In this manner, less than all of the virtual drives are necessary to reconstruct the cluster data.
  • Some RAID configurations with N disk drives enable complete data reconstruction with N-1 drives while others may enable complete reconstruction with the loss of 2 or more drives.
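  • For a parity-based set such as RAID level 5, this reconstruction property can be shown with a few lines of XOR arithmetic; the sketch below covers a single simplified stripe and is not a full RAID implementation.

```python
from functools import reduce

def parity(blocks: list[bytes]) -> bytes:
    """XOR parity over equally sized blocks of one stripe."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def reconstruct(surviving: list[bytes]) -> bytes:
    """Rebuild the one missing block of a stripe from the N-1 surviving blocks
    (data plus parity): the XOR of the survivors equals the missing block."""
    return parity(surviving)

if __name__ == "__main__":
    d1, d2 = b"virtual ", b"cluster!"        # two data blocks of a stripe
    p = parity([d1, d2])                     # parity block stored on a third drive
    assert reconstruct([d2, p]) == d1        # drive holding d1 has failed
    assert reconstruct([d1, p]) == d2        # drive holding d2 has failed
    print("stripe recoverable with any one drive missing")
```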
  • the interlink 410 selects and activates one of the virtual servers 414 or 416 . Assuming the virtual server 414 is selected, the interlink 410 promotes the virtual server 414 to active, and its OS 426 is configured to use the virtual drive 420 as its primary drive via a local link 419 . The OS 426 accesses the virtual drives 418 and 422 via first and second network links 427 and 429 , respectively.
  • the virtual server 414 continues without interruption during the failover event since it has sufficient information to reconstruct all of the cluster data via the virtual drives 420 and 422 in accordance with the data redundancy configuration operation.
  • the data redundancy operation continues even while a virtual drive is missing or unavailable as long as the remaining drives store sufficient redundant information to recover the data set for the cluster 400 .
  • the OS 426 automatically re-synchronizes the data on the virtual drive 418 with the virtual drives 420 and 422 in accordance with data redundancy operation.
  • the virtual server 414 may continue as the active node, or the virtual server 412 may be promoted (while the virtual server 414 is suspended and demoted) to return to the original configuration.
  • the virtual server 416 includes the OS 428 , a local link 421 to the virtual drive 422 and network links 431 and 433 to access the virtual drives 418 and 420 , respectively.
  • the virtual server 416 may also be selected by the interlink 410 to be promoted to active to replace the failed previously-active node.
  • a configuration change is made regarding use of the virtual drives 418-422 at the virtual software level prior to failover and is transparent to the operating systems 424, 426 and 428.
  • Each of the operating systems 424, 426 and 428 is configured to use the same redundant drive set (e.g., same RAID level 5 drive set) in which "drive 1" is the local drive and the remaining drives are configured as remote drives.
  • the OS 424 when active, sees the virtual drives 418 - 422 as a redundant drive set in which drive 1 is the virtual drive 418 .
  • When the OS 426 is activated upon failover, its identity and operation is substantially the same as the OS 424 in which it sees the same redundant drive set including the virtual drives 418-422, except that, since the OS 426 is activated on a different physical server, it sees drive 1 as the virtual drive 420. If the OS 428 is selected to be activated on failover, it sees the same data redundant drive set in which drive 1 is the virtual drive 422. The remaining drives in either case are ordered in a compatible manner to maintain integrity of the drive set. In this manner, when a passive virtual server is activated, its OS does not see a serious drive failure situation since its primary drive is operable. This is true even if the data redundant or mirrored drive of the failed server is temporarily unavailable. This configuration change improves the recovery process since the activated passive server can automatically resume operations as though a failure had not occurred.
  • any one of the virtual drives 418 - 422 may be temporarily taken offline without losing data. Further, a different virtual drive may be brought online to take the place of the removed drive, and the operative OS of the active node automatically synchronizes the data on the new drive.
  • the new virtual drive may be located on the same or even a different physical server as the removed virtual drive as long as the appropriate network links are provided.
  • the passive virtual servers 414 and 416 in the cluster 400 may be temporarily removed and/or replaced with similar servers on the same or different physical servers.
  • a new physical server (not shown) may be provided to replace the physical server 404 , where the virtual server 414 and/or virtual drive 420 may be moved to the new physical server or otherwise replaced as well.
  • a separate management function or service (not shown) may be provided and operated on any of the physical servers 402 - 406 and/or virtual servers 412 - 416 or on different physical or virtual servers.
  • the numbers of physical servers, virtual servers and virtual drives do not have to be equal.
  • An active and passive pair of virtual servers may be supported with three or more virtual drives, such as in a RAID level 5 configuration.
  • any number of virtual servers and any number of virtual drives is contemplated as long as redundant data is appropriately maintained in the drive set to ensure against loss of data, such as provided in many RAID configurations.
  • multiple virtual servers and/or virtual drives may be located on any single physical server, it may be desired to distribute the virtual servers and drives among as many physical servers as are available to ensure against loss of data in the event of failure of any one physical server.
  • the collective set of virtual drives for the cluster may store one or more disk images used to generate one or more of the passive virtual servers. The disk image(s) incorporate(s) attributes of the active virtual server.
  • the interlink 410 incorporates management functionality to setup, monitor and maintain one or more clusters in a similar manner as previously described with respect to the interlink 310 .
  • the interlink 410 may be designed, for example, with any level of intelligence to properly build a cluster configuration, to optimize cluster load and operation over time in response to available resources and dynamic load levels, and to re-build in response to node failure and during and after restoration.
  • the interlink 410 selects from among multiple passive nodes for promotion to active in response to a failover event based on any combination of static considerations (e.g., predetermined passive activation order) and dynamic considerations, such as existing or anticipated loads.
  • the interlink 410 selects to promote the virtual server 416 instead of the virtual server 414 .
  • the interlink 410 selects the virtual server 416 on physical server 406 .
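  • One hypothetical way to combine those considerations is sketched below; the scoring rule (load first, then a predetermined activation rank) is an assumption for illustration, not a scheme taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class PassiveNode:
    name: str
    activation_rank: int      # static consideration: predetermined order (lower is preferred)
    current_load: float       # dynamic consideration: 0.0 (idle) .. 1.0 (saturated)
    reachable: bool = True

def choose_node_to_promote(candidates: list[PassiveNode]) -> PassiveNode:
    """Prefer reachable nodes with low load, breaking ties by activation order."""
    viable = [n for n in candidates if n.reachable]
    if not viable:
        raise RuntimeError("no passive node available for promotion")
    return min(viable, key=lambda n: (round(n.current_load, 1), n.activation_rank))

if __name__ == "__main__":
    nodes = [PassiveNode("vs414", 1, 0.72), PassiveNode("vs416", 2, 0.31)]
    print("promote:", choose_node_to_promote(nodes).name)   # vs416 despite vs414's lower rank
```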
  • FIG. 5 is a block diagram of a cluster 500 similar to the cluster 300 except including a differential drive for each virtual drive. Similar components include identical reference numbers, where physical servers 302 , 304 , the network 306 , the interlink 310 , and the virtual servers 312 and 314 with operating systems 320 and 322 , respectively, are included.
  • the virtual drives 316 and 318 are replaced with static virtual drives 502 and 506 , respectively, which may represent “snapshots” of the virtual drives 316 and 318 at a specific time. Changes for the virtual drive 502 are stored in a differential drive 504 rather than being immediately incorporated within the virtual drive 502 .
  • changes for the virtual drive 506 are stored in a differential drive 508 rather than being immediately incorporated within the virtual drive 506 .
  • the static virtual drive 502 and its differential drive 504 replace the virtual drive 316 and the static virtual drive 506 and its differential drive 508 replace the virtual drive 318 .
  • the network link 324 is replaced with a network link 510 between the virtual server 312 and the differential drive 508 and the network link 326 is replaced with a network link 512 between the virtual server 314 and the differential drive 504 .
  • only one differential drive is shown for each virtual server, it is understood that a chain of one or more differential drives may be employed.
  • Operation of the cluster 500 is substantially similar to the cluster 300 , except that the differential drives 504 , 508 enable optimized disk input/output (I/O) to increase speed, efficiency and performance.
  • the operating systems 320 , 322 configure and operate the drive pair 502 , 504 in a mirrored configuration with the drive pair 506 , 508 in a similar manner as previously described.
  • the link 512 is optional and the OS 322 need not have access to the virtual drive 502 to maintain the integrity of the data.
  • the static state of the virtual drives 502 and 506 remain intact so that a roll-back may be performed to recapture the original static states of the virtual drives 502 and 506 .
  • differential drives 504 , 508 may be discarded or otherwise stored and re-used to recapture the additional changes to the static states of the virtual drives 502 and 506 , if desired.
  • the use of differential drives for a cluster provides enhanced efficiency, performance and flexibility.
  • the use of differential drives may be extended to other RAID configurations, such as, for example, RAID level 5 configurations.
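  • The differential-drive behavior amounts to copy-on-write: reads consult the chain of differential drives before the static base, while writes land only in the newest differential. The sketch below illustrates that lookup order under a toy block-map model and is not tied to any particular virtual disk format.

```python
from typing import Dict, List, Optional

class DifferentialDriveChain:
    """A static base snapshot plus a chain of differential drives (newest last)."""

    def __init__(self, base: Dict[int, bytes]):
        self.base = dict(base)                   # static snapshot; never modified
        self.diffs: List[Dict[int, bytes]] = [{}]

    def write(self, block: int, data: bytes) -> None:
        self.diffs[-1][block] = data             # changes go to the newest differential only

    def read(self, block: int) -> Optional[bytes]:
        for diff in reversed(self.diffs):        # newest change wins
            if block in diff:
                return diff[block]
        return self.base.get(block)              # fall back to the static snapshot

    def snapshot(self) -> None:
        self.diffs.append({})                    # extend the chain with a new differential

    def roll_back(self) -> None:
        self.diffs = [{}]                        # recapture the original static state

if __name__ == "__main__":
    drive = DifferentialDriveChain({0: b"original"})
    drive.write(0, b"changed")
    assert drive.read(0) == b"changed"
    drive.roll_back()
    assert drive.read(0) == b"original"
```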
  • FIG. 6 is a block diagram of a cluster 600 similar to the cluster 300 except communicatively coupled via an inter-network 602 .
  • Similar components include identical reference numbers, where physical servers 302 , 304 , the interlink 310 , the virtual servers 312 and 314 with operating systems 320 and 322 , respectively, are included.
  • the network 306 is replaced with the inter-network 602 illustrating geographically split locations and remote communications.
  • the illustration of the inter-network 602 is not intended to limit the embodiments of the network 306 to a local configuration, but instead to illustrate that the network 306 may be implemented as the inter-network 602 to physically separate the physical servers 302 and 304 as far apart from each other as desired.
  • the inter-network 602 spans any size of geographic area, including multiple networks interconnected together to span large geographic areas, such as wide-area networks (WANs) or the Internet or the like. Operation of the cluster 600 is substantially the same as the cluster 300 .
  • FIG. 7 is a block diagram of a cluster 700 similar to the cluster 300 except extended to include additional physical servers to form a cluster ring or cluster chain. Similar components include identical reference numbers, where physical servers 302 and 304 , the network 306 , and the virtual servers 312 and 314 with operating systems 320 and 322 , respectively, are included.
  • the network 306 is extended to include at least one additional physical server (PS 3 ) 702 , which includes another virtual server 704 and corresponding OS 706 .
  • the virtual server 704 is linked to a local virtual drive 708 via local link 710 .
  • the interlink 310 is replaced with interlink 714 , which is provided between the virtual servers 312 and 314 in a similar manner as the interlink 310 and is further interfaced to the virtual server 704 and/or the physical server 702 .
  • Operation of the cluster 700 is similar to the cluster 300 in which the virtual server 312 is the active server and the virtual server 314 is a passive server in an A/P cluster configuration.
  • the virtual drives 316 and 318 are operated in a mirrored configuration in which the virtual drive 318 stores a copy of the cluster data set. In the event of failover, the virtual server 314 is promoted to active status to maintain operation of the cluster 700 without data loss.
  • the virtual server 314 is an inactive instance of the virtual server 312, and is preconfigured so that the OS 322, upon activation of the virtual server 314, sees the virtual drive 318 as its primary drive. Also, in a similar manner as previously described, the virtual server 314 may be preconfigured so that the OS 322 sees the virtual drive 316 as the mirrored drive in the mirrored configuration, even if not immediately available because of failure of the physical server 302. Alternatively, in the cluster 700, a network link 712 is provided between the virtual server 314 and the virtual drive 708, and the virtual server 314 is preconfigured so that the OS 322 instead sees the virtual drive 708 as the mirrored drive in the mirrored configuration.
  • the virtual drive 708 is a replacement drive for the virtual drive 316 to maintain a mirrored configuration.
  • the OS 322 copies the cluster data set from the virtual drive 318 to the virtual drive 708 and then maintains the virtual drives 318 and 708 in a mirrored configuration.
  • the interlink 714 may maintain the new failover cluster configuration using the virtual server 314 as the active server regardless of whether the virtual server 312 comes back online.
  • the virtual server 704 may also be an inactive instance of the virtual server 312 , so that when the virtual server 314 is made active in response to the failover event, the interlink 714 employs the virtual server 704 as the new passive virtual server.
  • An optional network link 716 is provided so that the OS 706 may maintain the cluster data in a mirrored configuration in response to another failover event in which the virtual server 704 is promoted to active status to replace the previously active virtual server 314 .
  • the interlink 714 is configured to make the failover decisions based on any of the considerations previously described or described further below.
  • the cluster chain configuration of the cluster 700 provides at least one replacement virtual drive to replace a failed drive in the data redundant configuration. Additional physical servers and corresponding virtual drives may be included in the network to operate as replacement virtual drives. Also, the cluster chain configuration applies equally to data redundant configurations including three or more virtual drives. The cluster is preconfigured to replace a failed drive in the cluster with a new drive, and the newly active OS is configured to use the replacement drive to re-establish the data redundant configuration, albeit with a different set of virtual drives.
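  • Re-establishing redundancy with a replacement drive reduces to seeding the new drive from the surviving copy of the data set and then mirroring subsequent writes to both; the sketch below shows this with a toy block map standing in for the virtual drives.

```python
from typing import Dict

VirtualDrive = Dict[int, bytes]   # block number -> data, illustrative only

def rebuild_replacement_drive(surviving: VirtualDrive) -> VirtualDrive:
    """Seed a fresh replacement drive (e.g., 708) from the surviving drive (e.g., 318)."""
    return dict(surviving)

def mirrored_write(block: int, data: bytes, *drives: VirtualDrive) -> None:
    """Once the replacement is seeded, every write goes to both members of the pair."""
    for drive in drives:
        drive[block] = data

if __name__ == "__main__":
    drive_318: VirtualDrive = {0: b"cluster data"}       # surviving copy after failover
    drive_708 = rebuild_replacement_drive(drive_318)     # replaces the failed drive 316
    mirrored_write(1, b"new writes", drive_318, drive_708)
    assert drive_318 == drive_708
```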
  • FIG. 8 is a simplified block diagram illustrating an exemplary embodiment of an interlink 800 configured as a service and with expanded capabilities, which may be employed as any of the interlinks 310 , 410 and 714 .
  • the interlink 800 includes a status monitor 802 , a load monitor 804 and a cluster manager 806 interfaced with each other.
  • the status monitor 802 monitors the status of each of the active nodes (physical and virtual servers) of each cluster and provides failure notifications.
  • the load monitor 804 monitors the relative load of the physical servers and provides load information.
  • the cluster manager 806 configures and manages one or more clusters using a plurality of virtual servers implemented on a corresponding plurality of physical servers.
  • the cluster manager 806 configures and maintains each cluster, manages operation of each established cluster, modifies cluster components if and when necessary or desired, and terminates clusters if and when desired.
  • the cluster manager 806 receives a failure notification for a failed active node, selects a passive virtual server for promotion, and manages the failover process to maintain cluster operation without data loss.
  • the cluster manager 806 uses load information from the load monitor 804 to determine which passive server to promote, and periodically or continuously monitors load information to optimize operation of each cluster.
  • the interlink e.g., 310 or 410 or 714
  • the interlink at least has the capability to manage the failover process, such as by selecting and promoting a passive server to active mode to resume handling of the cluster load, to simplify the configuration and operation of the virtual servers and virtual drives in the cluster.
  • the virtual servers, operating systems, application programs, virtual drives and virtual software need not have any cluster capabilities at all or be aware of cluster operation or configuration.

Abstract

A shared-nothing virtual cluster including multiple virtual servers located on a corresponding number of physical servers linked together via a network. The virtual servers collectively form an active/passive (A/P) cluster including an active virtual server and at least one passive server. The shared-nothing virtual cluster further includes an interlink and multiple virtual drives located on the physical servers. The active virtual server handles a cluster load and executes a first operating system that operates the virtual drives in a data redundant configuration that collectively stores a data set for the cluster. Each passive virtual server, when activated, is coupled to a sufficient number of the virtual drives with redundant information to recover the data set for the cluster. The interlink is operatively configured to detect failure of the active server and to initiate promotion of a virtual server to active status to resume handling the cluster load after failover.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Application No. 60/474,992 filed on Jun. 02, 2003, which is incorporated by reference herein for all intents and purposes.[0001]
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0002]
  • The present invention relates to clustering and virtualization technology, and more particularly to a shared-nothing virtual cluster formation that eliminates shared hardware. [0003]
  • 2. Description of the Related Art [0004]
  • The following definitions are provided for this disclosure with the intent of providing a common lexicon. A “physical” device is a material resource, such as, for example, a server, a network switch, memory devices, a disk drive, etc. Even though physical devices are discrete resources, they are not inherently unique. For example, random access memory (RAM) devices and a central processing unit (CPU) in a physical server may be interchangeable between like physical devices. Also, network switches may be easily exchanged with minimal impact. A “logical” device is a representation of a physical device to make it unique and distinct from other physical devices. For example, every network interface has a unique media access control (MAC) address. A MAC address is the logical unique identifier of a physical network interface card (NIC). A “traditional” device is a combined logical and physical device in which the logical device provides the entire identity of a physical device. For example, a physical NIC has its MAC address permanently affixed so the physical device is inextricably tied to the logical device. [0005]
  • A “virtualized” device breaks the traditional interdependence between physical and logical devices. Virtualization allows logical devices to exist as an abstraction without being directly tied to a specific physical device. Simple virtualization can be achieved using logical names instead of physical identifiers. For example, using an Internet Uniform Resource Locator (URL) instead of a server's MAC address for network identification effectively virtualizes the target server. Complex virtualization separates physical device dependencies from the logical device. For example, a virtualized NIC could have an assigned MAC address that exists independently of the physical resources managing the NIC network traffic. [0006]
  • A cluster includes at least two computers or servers communicatively coupled together and to a shared hardware disk drive that stores shared data. Clustering may be implemented as Active/Active (A/A) or Active/Passive (A/P). For the A/A configuration, the servers are active and operating to handle respective loads. The shared data (called a “quorum”) stored on the shared drive is constantly shared between the two servers. In the event of failure of either server in a two-server cluster for the A/A configuration, the other server automatically resumes responsibility for the entire load previously shared by both. For the A/P configuration, one of the servers is the “active” node whereas the other server is the “passive” node. The active node handles the entire cluster load while the passive node remains in a standby state or the like. In the event of failure of the active server in the A/P configuration, a “failover” occurs in which the passive server is switched or otherwise “promoted” to active mode to resume handling the cluster load. In this manner, the passive server operates as a fail-safe mechanism for the active server. [0007]
  • A link is provided between the clustered servers to detect failure and to facilitate a failover event. The link may perform several functions, such as monitoring the status of the active server, detecting failure of the active server and providing failure notification. The link may further operate to coordinate failover, if necessary. In the event of failure, the active server is “demoted” to passive status and the passive server is promoted to active status to effectuate the failover. [0008]
  • A traditional virtual cluster employs a virtualization layer on top of physical devices. The virtual cluster is similar to the physical cluster in that it includes two computers or servers communicatively coupled together and a link that detects a failover condition. In the virtual case, however, a first virtual server is implemented on a first underlying physical server and a second virtual server is implemented on a second physical server. A communication link is provided between the virtual servers and with a shared virtual drive implemented on a third physical device. The shared virtual drive stores shared data for the virtual servers of the virtual cluster. [0009]
  • Operation of the virtual cluster is similar to its physical counterpart. For the A/P configuration in which the first virtual server is the active node, in the event of failure of the first physical server, the passive virtual server is promoted to active mode to resume handling of the cluster load. In this case, a drive image or the like describing or otherwise defining characteristics of the virtual server is stored on the virtual drive, and is employed to create or otherwise power up the virtual server during the failover event. [0010]
  • The traditional cluster configurations have several disadvantages. Both servers require access to the shared drive, which results in an expensive and complicated structure. The traditional clusters have physical limitations in that the physical separation between the physical servers and physical devices storing the shared data is a limiting factor. Also, the additional shared hardware for the shared drive results in an additional point of failure. The shared data configuration is expensive and requires special software and configuration. The operating system (OS) of both servers must support clustering. Application software may need to support clustering. For the virtual configuration, the virtualization software must support clustering or otherwise be configured to handle the clustered structure. The shared drive (physical or virtual) must be pre-configured to support clustering. The clustering portions, including the servers and drives, are not easily replaced or substituted. [0011]
  • There is a need in the industry for a more flexible and less complicated cluster configuration. [0012]
  • SUMMARY OF THE INVENTION
  • A shared-nothing virtual cluster according to an embodiment of the present invention includes a plurality of virtual servers located on a corresponding plurality of physical servers linked together via a network, where the virtual servers collectively form an active/passive (A/P) cluster including an active virtual server and at least one passive server. The shared-nothing virtual cluster further includes an interlink and a plurality of virtual drives located on the physical servers. The active virtual server handles a cluster load and executes a first operating system (OS) that operates the virtual drives in a data redundant configuration that collectively stores a data set for the cluster. Each passive virtual server is coupled to a sufficient number of the virtual drives with redundant information to recover the data set for the cluster. The interlink is operatively configured to detect failure of the active virtual server and to initiate promotion of a passive virtual server to active status to resume handling the cluster load after failover. [0013]
  • In one embodiment, the first OS maintains first and second virtual drives in a mirrored configuration. Alternatively, the virtual drives may include at least three drives in the data redundant configuration, such as, for example, a RAID level 5 configuration. The passive virtual server may be an inactive instance of the active virtual server and include a second OS that is configured, when the passive virtual server is activated, to operate the second virtual drive as its primary drive storing the data set. The inactive instance may further be configured to use a replacement drive to complete the data redundant configuration, such as to replace a virtual drive located on a failed physical server. [0014]
  • In other alternative embodiments, each virtual drive may include a virtual static drive and one or more virtual differential drives. The network may be an inter-network so that the physical servers may be physically located at geographically remote locations. In one embodiment, the interlink includes a status monitor, a load monitor and a cluster manager. The status monitor detects failure of the active virtual server. The load monitor monitors the relative load level of each of the physical servers. The cluster manager configures and maintains the cluster, manages failover, and selects a passive server for promotion during failover. The cluster manager may further be configured to monitor load information from the load monitor to ensure adequate resources before and after the failover. [0015]
  • A virtual cluster according to an embodiment of the present invention includes a first virtual server located on a first of a plurality of physical servers, where the first virtual server is initially active and handling a cluster load. Cluster data is stored in a plurality of virtual drives organized in a data redundant configuration. The virtual cluster includes a disk image stored on the virtual drives and an interlink. The disk image incorporates attributes of the first virtual server. The interlink is operative to monitor the first virtual server and to initiate promotion of a second virtual server on a second physical server to active status using the disk image in the event of failure of the first virtual server to effectuate failover. The second virtual server, when activated, resumes handling of the cluster load and accesses the cluster data. The virtual drives may be configured in any suitable data redundant configuration, such as a mirrored configuration or a configuration including three or more drives. Other alternative embodiments are contemplated, such as similar to those previously described. [0016]
  • A method of configuring and operating a shared-nothing virtual cluster according to an embodiment of the present invention includes operating an active virtual server on a first one of a plurality of physical servers coupled together via a network to handle a cluster load, storing, by the active virtual server, cluster data onto a plurality of virtual drives located on the physical servers and organized in a data redundant configuration, detecting failure of the first physical server, and in the event of failure of the first physical server, activating a second virtual server on a second physical server to resume handling of the cluster load, and providing access by the activated second virtual server to a sufficient number of the virtual drives collectively storing the cluster data. [0017]
  • The activating of a second virtual server may include activating an inactive instance of the active virtual server. The method may include storing a disk image including attributes of the active virtual server, and retrieving and using the disk image to activate the second virtual server. The method may include monitoring relative load of each of the physical servers and providing load information. The method may include selecting the first physical server to initially handle the cluster load based on the load information, and selecting the second physical server to resume handling of the cluster load based on the load information in the event of failure of the first physical server. Other alternative embodiments are contemplated, such as similar to those previously described.[0018]
  • BRIEF DESCRIPTION OF THE DRAWING(S)
  • The benefits, features, and advantages of the present invention will become better understood with regard to the following description and accompanying drawings, in which: [0019]
  • FIG. 1 is a simplified block diagram of a traditional cluster employing physical devices; [0020]
  • FIG. 2 is a simplified block diagram of a traditional virtual cluster that employs a virtualization layer on top of physical devices; [0021]
  • FIG. 3 is a block diagram of a shared-nothing A/P cluster implemented according to an exemplary embodiment of the present invention; [0022]
  • FIG. 4 is a block diagram of a shared-nothing cluster implemented according to another exemplary embodiment of the present invention employing a RAID level configuration; [0023]
  • FIG. 5 is a block diagram of a cluster similar to the cluster of FIG. 3 except including a differential drive for each virtual drive; [0024]
  • FIG. 6 is a block diagram of a cluster similar to the cluster of FIG. 3 except communicatively coupled via an inter-network; [0025]
  • FIG. 7 is a block diagram of a cluster similar to the cluster of FIG. 3 except extended to include additional physical servers to form a cluster ring or cluster chain; and [0026]
  • FIG. 8 is a simplified block diagram illustrating an exemplary embodiment of an interlink configured as a service and with expanded capabilities.[0027]
  • DETAILED DESCRIPTION
  • The following description is presented to enable one of ordinary skill in the art to make and use the present invention as provided within the context of a particular application and its requirements. Various modifications to the preferred embodiment will, however, be apparent to one skilled in the art, and the general principles defined herein may be applied to other embodiments. Therefore, the present invention is not intended to be limited to the particular embodiments shown and described herein, but is to be accorded the widest scope consistent with the principles and novel features herein disclosed. [0028]
  • FIG. 1 is a simplified block diagram of a [0029] traditional cluster 100 employing physical devices. The cluster 100 includes two computers or servers 102 and 104 coupled together via a network 106, which is further coupled to a shared hardware disk drive 108 that stores shared data. The shared data stored on the shared drive 108 is constantly shared between the two servers. The shared physical drive 108 may be implemented as a System Area Network (SAN) or the like. Although clustering may be implemented as Active/Active (A/A) or Active/Passive (A/P), the present invention primarily concerns the A/P configuration. For the A/P configuration, one of the servers, such as the server 102, is the active node whereas the other server 104 is the passive node. The active node handles the entire cluster load while the passive node remains in a standby state or the like. In the event of failure of the active server 102 in the A/P configuration, a “failover” occurs in which the passive server 104 is switched or otherwise “promoted” to active mode to resume handling the cluster load. In this manner, the server 104 operates as a fail-safe mechanism for the server 102.
  • An [0030] interlink 110 is provided between the servers 102, 104 to detect failure and to facilitate a failover event. The interlink 110 monitors the status of the active server 102, detects failure of the active server 102 and provides failure notification. The interlink 110 may further operate to coordinate failover, if necessary. In the event of failure, the active server 102 is “demoted” to passive status and the passive server 104 is promoted to active status to effectuate the failover.
  • FIG. 2 is a simplified block diagram of a traditional [0031] virtual cluster 200, that employs a virtualization layer on top of physical devices. The virtual cluster 200 is similar to the physical cluster 100 in that it includes two computers or servers 202 and 204 coupled via a network 206 and an interlink 210 that detects a failover condition for the servers 202 and 204. In this case, however, a virtual server 212 is implemented on the underlying physical server 202 and a virtual server 214 is implemented on the physical server 204. Virtual extensions of the network 206 establish a communication link between the virtual servers 212 and 214 and with a shared virtual drive 208 implemented on another physical device 216. The shared virtual drive 208 stores shared data for the virtual servers 212 and 214.
  • Operation of the [0032] cluster 200 is similar to the cluster 100. For the A/P configuration in which the virtual server 212 is the active node, in the event of failure of the physical server 202 as detected by the interlink 210, the passive virtual server 214 is promoted to active mode to resume handling of the cluster load. In this case, a drive image or the like describing or otherwise defining characteristics of the virtual server 212 is stored on the virtual drive 208, and is employed to create or otherwise power up the virtual server 214 during the failover event.
  • The traditional cluster configurations as exemplified by the [0033] clusters 100 and 200 have several disadvantages. Both of the servers (102, 104 or 212, 214) require access to the shared drive (108 or 208), which results in an expensive and complicated structure. The traditional clusters 100 and 200 have physical limitations in that the physical separation between the physical servers and physical devices storing the shared data is a limiting factor. Also, the additional shared hardware, such as the shared physical drive 108 or the physical device 216, results in an additional point of failure. The shared data configuration is expensive and requires special software and configuration. The operating system (OS) of both servers must support clustering. Application software may need to support clustering. For the virtual configuration, the virtualization software must support clustering or otherwise be configured to handle the clustered structure. The shared drive (physical 108 and virtual 208) must be pre-configured to support clustering. The clustering portions, including the servers and drives, are not easily replaced or substituted.
  • FIG. 3 is a block diagram of a shared-nothing A/P [0034] cluster 300 implemented according to an embodiment of the present invention. A virtual server 312 is located on a first physical server 302 and serves as the primary or active server in the A/P cluster configuration for handling the entire cluster load. Another virtual server 314 is located on a second physical server 304 and serves as the secondary or passive server in the A/P cluster configuration. The virtual servers 312 and 314 are communicatively coupled or linked together via a network 306. The virtual server 314 is passive and initially in a standby or powered-down mode. In one embodiment, the virtual server 314 is an inactive instance of the virtual server 312, as further described below. An interlink 310 is provided between the physical servers 302 and 304 or otherwise between the virtual servers 312 and 314. The interlink 310 at least operates to detect and provide notice of failure of the virtual server 312 and/or the physical server 302. A virtual drive 316 is implemented on the physical server 302 and coupled to the active virtual server 312 during normal operation via a local link 315. A network link 324 is provided between the virtual server 312 and another virtual drive 318 implemented on the physical server 304. The network links to the virtual drives described herein, including the network link 324, are shown with dashed lines and are implemented across corresponding networks, such as the network 306.
  • The [0035] virtual server 312 executes an operating system (OS) 320 that is configured to use the virtual drive 316 as its primary drive and to operate the virtual drive 318 via the link 324 as a mirrored drive in a mirrored data redundant configuration, such as according to RAID level 1. The OS 320 maintains a copy of data on both of the virtual drives 316 and 318. The virtual server 314 includes an OS 322, which is a substantially identical copy of the OS 320 while the virtual server 312 is active. The persistent attributes of the two virtual servers 312, 314 are the same so that they effectively have the same identity. Examples of persistent attributes include a media access control (MAC) address, a boot disk image, a system identifier, a processor type and access credentials. The persistent attributes may further include semi-persistent attributes including an Internet Protocol (IP) address, a logical name, a server cloud manager identifier, user account information, a non-boot disk image and network connections. Examples of non-persistent attributes include processor resource information, memory resource information, keyboard/video/mouse (KVM) resources and disk redundancy level. There is no conflict between the virtual servers 312 and 314 since only one is active at any given time.
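  • For illustration only, the attribute grouping above can be pictured as a small data model. The following sketch (Python; the class and field names are assumptions, not taken from the disclosure) shows how an inactive instance might share the persistent and semi-persistent identity of the active virtual server while receiving its own non-persistent, host-local resources.
```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class PersistentAttributes:
    # Identity shared by the active server and its inactive instance;
    # only one of the two is ever active, so there is no conflict.
    mac_address: str
    boot_disk_image: str
    system_id: str
    processor_type: str
    access_credentials: str

@dataclass
class SemiPersistentAttributes:
    ip_address: str
    logical_name: str
    cloud_manager_id: str

@dataclass
class NonPersistentAttributes:
    # Supplied locally when an instance is activated on a physical host.
    cpu_shares: int = 1
    memory_mb: int = 1024
    primary_drive: str = "drive1"       # logical slot, remapped per host

@dataclass
class VirtualServer:
    persistent: PersistentAttributes
    semi_persistent: SemiPersistentAttributes
    non_persistent: NonPersistentAttributes
    active: bool = False

def make_inactive_instance(active_server: VirtualServer,
                           local: NonPersistentAttributes) -> VirtualServer:
    """Clone the identity (persistent/semi-persistent) but not local resources."""
    return VirtualServer(persistent=active_server.persistent,
                         semi_persistent=replace(active_server.semi_persistent),
                         non_persistent=local,
                         active=False)
```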
  • In the event of a failure of the [0036] virtual server 312 or the physical server 302 as detected by the interlink 310, the passive virtual server 314 is activated on the physical server 304 to resume handling of the cluster load. Since the persistent attributes of the virtual server 314 are the same as the virtual server 312, it is effectively identified as the same server and resumes in place of the failed virtual server 312. The OS 322, when activated, sees and uses the virtual drive 318 as its primary drive via local link 317 rather than the virtual drive 316. The virtual server 314 may be powered up using a drive image stored on the virtual drives 316 and 318. Since the virtual drive 318 contains a duplicate copy of the data set on the virtual drive 316 in accordance with the mirrored drive configuration, no cluster data is lost during the failover event. Furthermore, in one embodiment, the OS 322 operates the virtual drives 318 and 316 in a mirrored configuration, similar to that of the OS 320, except that the virtual drive 318 is its primary drive whereas the virtual drive 316 is the mirrored drive coupled via another network link 326. Thus, the OS 322 of the newly started virtual server 314 operates the virtual drives 318 and 316 in a similar yet reversed mirrored configuration. The links 324 and 326 form a cross-linked connection between the virtual servers 312, 314 and the virtual drives 316, 318.
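  • A compressed sketch of the failover just described, assuming a simple orchestration model (the Node class and the failover function are illustrative, not part of the disclosure): the standby is promoted and its local drive becomes the primary of the reversed mirror.
```python
# Illustrative failover for the two-node mirrored cluster of FIG. 3.
# Names and structure are assumptions; the patent does not prescribe an API.

class Node:
    def __init__(self, name, local_drive, remote_drive, active=False):
        self.name = name
        self.local_drive = local_drive      # virtual drive on the same host
        self.remote_drive = remote_drive    # mirrored drive across the network
        self.active = active
        self.primary = self.mirror = None

def failover(failed: Node, standby: Node) -> Node:
    """Demote the failed node and promote the standby without data loss."""
    failed.active = False
    standby.active = True
    # Reversed mirror: the standby's local drive becomes the primary and the
    # old primary becomes the mirror (it may be temporarily unavailable).
    standby.primary, standby.mirror = standby.local_drive, standby.remote_drive
    return standby

vs312 = Node("vs312", local_drive="vd316", remote_drive="vd318", active=True)
vs314 = Node("vs314", local_drive="vd318", remote_drive="vd316")
new_active = failover(vs312, vs314)
assert new_active.primary == "vd318" and new_active.mirror == "vd316"
```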
  • In one exemplary embodiment, the configuration change regarding the reversed use of the [0037] virtual drives 316 and 318 is made at the virtual software level prior to failover and is transparent to the operating systems 320 and 322. Both of the operating systems 320 and 322 are configured to use the same redundant drive set (e.g., the same RAID set) in which “drive 1” is the local drive and “drive 2” is the remote drive. The OS 320, when active, sees the virtual drives 316 and 318 as a redundant drive set in which drive 1 is the virtual drive 316 and drive 2 is the virtual drive 318. When the OS 322 is activated upon failover, the identity and operation of the OS 322 is substantially the same as the OS 320 in that it sees the same redundant drive set including the virtual drives 316 and 318, except that, since the OS 322 is activated on a different physical server, it sees drive 1 as the virtual drive 318 and drive 2 as the virtual drive 316. In this manner, when a passive server is activated, it does not see a serious drive failure situation since its primary drive is operable. This is true even if the mirrored drive 316 is temporarily unavailable. This configuration change improves the recovery process since the activated passive server can automatically resume operations as though a failure had not occurred.
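  • The per-host remapping can be as simple as a lookup that the virtualization layer consults when an OS opens a logical drive slot. A minimal sketch follows, assuming a plain dictionary keyed by physical server; host identifiers such as "PS1" are hypothetical.
```python
# Both OS instances are configured against the same logical drive set
# ("drive1" local primary, "drive2" remote mirror); only the mapping to
# virtual drives differs per physical host.

DRIVE_MAP = {
    "PS1": {"drive1": "virtual_drive_316", "drive2": "virtual_drive_318"},
    "PS2": {"drive1": "virtual_drive_318", "drive2": "virtual_drive_316"},
}

def resolve(host: str, logical_slot: str) -> str:
    """Return the virtual drive that backs a logical slot on a given host."""
    return DRIVE_MAP[host][logical_slot]

# Whichever instance is active opens "drive1" as its primary; the
# virtualization layer hands it the drive that is local to its host.
assert resolve("PS1", "drive1") == "virtual_drive_316"
assert resolve("PS2", "drive1") == "virtual_drive_318"
```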
  • It is noted that the [0038] links 324 and 326 may be effectively broken upon start up of the virtual server 314, such as in the event that the physical server 302 was the cause of the failure where the virtual drive 316 is no longer available. Nonetheless, no data is lost and the virtual server 314 may operate uninterrupted while the virtual drive 316 is unavailable or otherwise while the link 326 is broken. When the physical server 302 is restarted and the virtual drive 316 is again available, the OS 322 of the virtual server 314 automatically re-synchronizes the data on the mirrored virtual drive 316 with the data on the virtual drive 318. Operation may proceed in this reversed manner in which the virtual server 314 continues as the active server while the virtual server 312 remains as the passive server in standby and/or powered down mode. Alternatively, the active/passive status of the virtual servers 312, 314 may be swapped again to return to the original configuration if desired.
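  • Re-synchronization after the failed host returns is left to the OS's normal redundant-disk behavior; the sketch below only illustrates the idea, assuming writes made while the mirror was absent are tracked at block granularity (an assumed mechanism, not one specified in the disclosure).
```python
# Assumed dirty-block resynchronization; the patent relies on the OS's own
# RAID resync rather than specifying a mechanism.

def resync(primary_blocks: dict, mirror_blocks: dict, dirty: set) -> None:
    """Copy blocks written while the mirror was offline back onto the mirror."""
    for block in sorted(dirty):
        mirror_blocks[block] = primary_blocks[block]
    dirty.clear()

primary = {0: b"boot", 1: b"data-v2", 2: b"log-v3"}   # e.g. virtual drive 318
mirror  = {0: b"boot", 1: b"data-v1", 2: b"log-v1"}   # returning virtual drive 316
resync(primary, mirror, dirty={1, 2})
assert mirror == primary
```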
  • In an alternative embodiment, the [0039] OS 322 does not support any RAID level or mirroring and the link 326 is removed or otherwise not provided. In this case, in the event of failover, the virtual server 314 is activated and configured to operate using its local virtual drive 318 as its primary drive without loss of cluster data. When the physical server 302 is available again, the virtual server 314 is suspended or otherwise shut down, and the virtual drive 316 is synchronized with the virtual drive 318. Then, the virtual server 312 is restarted using the virtual drive 316 as its primary and the virtual drive 318 as its secondary, mirrored drive.
  • It is appreciated that a shared drive is not required for the shared-nothing [0040] cluster 300. Instead, the data redundant capabilities of the OS 320 (e.g., RAID or the like) are utilized to synchronize the same cluster data or data set at two different locations on two different physical computers. The synchronization of the virtual drives 316 and 318 is transparent and ongoing and fully enables successful failover without loss of data. The original configuration (or otherwise, the equivalent reversed configuration) is quickly re-established in a seamless manner since the data redundant disk operation capabilities automatically synchronize and reconstruct one drive to match another. A significant advantage is that the virtualization software, the operating systems 320 and 322, and the application software provided on both of the virtual servers 312, 314 are not required to support clustering or even be aware that a clustered configuration exists. The virtual drives 316 and 318 do not have to be configured to support clustering or to share data between two servers. Generic application, OS and virtualization software may be employed as long as the OS supports some form of data redundant configuration, such as, for example, RAID level 1 or any other suitable RAID level that provides redundancy. Embodiments of the present invention are illustrated using RAID level 1 and level 5 configurations, although it is understood that other RAID levels or other data redundant methods now known or newly developed may be used without departing from the spirit and scope of the present invention. The data is automatically maintained by the data redundancy operations, such as the standard RAID disk operation capabilities. Data redundancy across multiple drives ensures the integrity of the data even while a drive is missing or unavailable (e.g., data is maintained on one drive while the mirrored drive is missing).
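  • The whole scheme rests on the ordinary mirrored write path of the OS: every write goes to both drives, and an unreachable mirror is tolerated and caught up later. A minimal sketch, with an invented DriveUnavailable error and in-memory drives standing in for the virtual drives:
```python
class DriveUnavailable(Exception):
    """Raised when a (virtual) drive cannot be reached, e.g. its host is down."""

class InMemoryDrive:
    def __init__(self, online=True):
        self.blocks, self.online = {}, online

    def write(self, block, data):
        if not self.online:
            raise DriveUnavailable()
        self.blocks[block] = data

def mirrored_write(primary, mirror, block, data, pending):
    """Write to both drives; queue the block for resync if the mirror is down."""
    primary.write(block, data)          # the primary write must succeed
    try:
        mirror.write(block, data)
    except DriveUnavailable:
        pending.add(block)              # resynchronize when the mirror returns

primary, mirror, pending = InMemoryDrive(), InMemoryDrive(online=False), set()
mirrored_write(primary, mirror, 7, b"cluster data", pending)
assert 7 in pending and primary.blocks[7] == b"cluster data"
```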
  • The [0041] cluster 300 may be optimized by optimizing the network 306 to limit traffic with a special network link or the like between the physical servers 302, 304. In one embodiment, the network 306 is configured as a dedicated network. Alternatively, the network 306 is implemented as a physical crossover. The mirror network link 326 is optional and not required since the virtual server 314 may resume operation after failover using the virtual drive 318 without the virtual drive 316. Alternatively, the link 326 may be established at a later time or even by a different physical server (not shown). Also, the original physical server 302, after failover, is not required to rebuild the cluster 300. For example, the OS 322 synchronizes the data between the virtual drive 318 and another similar virtual drive on another physical server. Alternatively, the virtual server 314 is suspended and the OS associated with another virtual server on another physical server synchronizes the data between a different virtual drive and the virtual drive 318. In this latter case, the other virtual server continues operating as the active node whereas the virtual server 314 once again becomes the passive node.
  • The [0042] interlink 310 may be implemented in many different ways varying from simple to complex. The interlink 310 does not need to be implemented with network communication and may be implemented solely on the backup or passive node and provide relatively simple notifications. For example, the interlink 310 may comprise a simple monitor function on the physical server 304 for monitoring the mirrored virtual drive 318 (e.g., a file lock) to make the failover decision. The interlink 310 may be more complex and be designed to monitor one or more heartbeat signals from either or both of the physical server 302 and/or the virtual server 312. In another embodiment, the interlink 310 is a service function that detects failure of the active node and coordinates the failover process by promoting the passive node to active and maintaining seamless handling of the load.
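  • At the simple end of that range, an interlink running only on the passive node might do no more than poll for a sign of life and trigger promotion after a few missed checks. A toy heartbeat sketch follows; the interval, threshold and callback names are arbitrary choices, not requirements of the disclosure.
```python
import time

def monitor(read_heartbeat, on_failure, interval=1.0, missed_limit=3):
    """Poll the active node; call on_failure after missed_limit missed beats."""
    missed = 0
    while True:
        if read_heartbeat():
            missed = 0
        else:
            missed += 1
            if missed >= missed_limit:
                on_failure()            # e.g. promote the local passive server
                return
        time.sleep(interval)

# Simulated wiring: the "active node" never answers, so failover fires quickly.
if __name__ == "__main__":
    monitor(read_heartbeat=lambda: False,
            on_failure=lambda: print("failover: promoting passive node"),
            interval=0.01)
```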
  • In a more complex embodiment, the [0043] interlink 310 incorporates management functions that provide load monitoring, load balancing and cluster operation and configuration. In one embodiment, the interlink 310 incorporates management functionality to set up, monitor and maintain one or more clusters. The interlink 310 may be designed with any level of intelligence to properly build a cluster configuration, to optimize cluster load and operation over time in response to available resources and dynamic load levels, and to re-build in response to node failure and during and after restoration. For example, in a more sophisticated embodiment, the interlink 310 performs vital pre-failure roles, such as helping to build the initial cluster configuration correctly and periodically or continuously monitoring loads on passive systems to ensure adequate resources in the event of failure of an active system. The interlink 310 is capable of redistributing cluster elements in a seamless manner to optimize load handling during normal operation (e.g., prior to a failover event) and during and after any failover event(s). The interlink 310 also manages restoration of operation to the original or main system when the failed main node is back online. The interlink 310 ensures appropriate cluster operation, such as by enforcing that only one node (and corresponding virtual server) is active at a time in the A/P cluster configuration. The interlink 310 demotes the failed node in response to a failover event and selects a passive node for promotion to active based on any combination of static considerations (e.g., a predetermined passive activation order) and dynamic considerations, such as existing or anticipated loads.
  • The use of virtual servers for implementing the active and passive servers of the cluster nodes enables a significant advantage in that each passive server can be implemented as a substantial duplicate of the active server. This is achieved by storing attributes (e.g., persistent attributes) of the active virtual server in a file image that is stored on the virtual drives. The stored drive image is used to create or otherwise power up a passive virtual server to continue handling the cluster load previously being handled by the failed active node. Non-persistent attributes are provided and the local drive is configured to be the primary drive for the activated virtual server. The drive image can be stored on another storage or memory and can be used to create and activate the passive virtual server on the same or even a different physical server. [0044]
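  • One plausible realization of the drive image is a small serialized record of the persistent attributes, stored redundantly with the cluster data and used to bring up a passive copy on whatever physical host is chosen. The JSON format and function names below are illustrative assumptions, not the format used by the disclosure.
```python
import json

def save_image(attributes: dict, path: str) -> None:
    """Persist the attributes that give the virtual server its identity."""
    with open(path, "w") as f:
        json.dump(attributes, f)

def activate_from_image(path: str, host: str, local_primary_drive: str) -> dict:
    """Recreate the server from its image on whichever physical host is chosen."""
    with open(path) as f:
        server = json.load(f)
    # Non-persistent details are supplied at activation time.
    server.update({"host": host,
                   "primary_drive": local_primary_drive,
                   "active": True})
    return server

save_image({"mac": "00:11:22:33:44:55", "system_id": "vs312"}, "vs312.image.json")
standby = activate_from_image("vs312.image.json", host="PS2",
                              local_primary_drive="virtual_drive_318")
assert standby["active"] and standby["host"] == "PS2"
```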
  • FIG. 4 is a block diagram of a shared-nothing [0045] cluster 400 implemented according to another exemplary embodiment of the present invention employing a RAID level 5 configuration. Three physical servers 402, 404 and 406 are communicatively coupled together via a network 408. Virtual servers 412, 414 and 416 are operated on the physical servers 402, 404 and 406, respectively. The virtual server 412 is initially the active server handling the entire cluster load whereas the virtual servers 414 and 416 are the passive servers in standby mode or otherwise powered down. In one embodiment, both of the virtual servers 414 and 416 are inactive instances of the virtual server 412. An interlink 410 is provided between the virtual server 412 and the virtual servers 414, 416 to detect failure of the active virtual server 412 and to activate one of the passive virtual servers 414 or 416. The cluster 400 includes virtual drives 418, 420 and 422 located on physical servers 402, 404 and 406, respectively, and configured to be operated according to a data redundant configuration including three or more drives. An exemplary data redundant configuration is the RAID level 5 configuration. Thus, the virtual drives 418-422 collectively store the entire cluster data or data set. The active virtual server 412 executes an OS 424 that communicates with the virtual drives 418, 420 and 422 via links 417, 423 and 425, respectively, wherein link 417 is a local link and links 423 and 425 are network links. The virtual servers 414 and 416 include respective OSs 426 and 428, which are substantially identical to the OS 424. In the selected data redundant configuration (e.g., RAID level 5), redundancy of data is employed so that any one of the virtual drives 418-422 may be removed without loss of data. The information on the missing drive is reconstructed using the combined information of the remaining drives, as known to persons having ordinary skill in the art. In this manner, fewer than all of the virtual drives are needed to reconstruct the cluster data. Some RAID configurations with N disk drives enable complete data reconstruction with N-1 drives, while others tolerate the loss of two or more drives.
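  • The reconstruction relies on parity. The sketch below shows the core idea with simple XOR parity over two data chunks and one parity chunk; real RAID level 5 additionally rotates parity across the drives and operates on striped blocks, which is omitted here for brevity.
```python
from functools import reduce

def parity(*chunks: bytes) -> bytes:
    """Byte-wise XOR of equally sized chunks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*chunks))

d1 = b"clusterA"        # data chunk held on virtual drive 418
d2 = b"clusterB"        # data chunk held on virtual drive 420
p  = parity(d1, d2)     # parity chunk held on virtual drive 422

# Drive 418 goes offline: its chunk is recovered from the two survivors.
recovered = parity(d2, p)
assert recovered == d1
```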
  • In the event of failure of the active [0046] virtual server 412, the interlink 410 selects and activates one of the virtual servers 414 or 416. Assuming the virtual server 414 is selected, the interlink 410 promotes the virtual server 414 to active, and its OS 426 is configured to use the virtual drive 420 as its primary drive via a local link 419. The OS 426 accesses the virtual drives 418 and 422 via first and second network links 427 and 429, respectively. If the link 427 is broken or otherwise while the virtual drive 418 is missing or unavailable, such as if the physical server 402 has failed, the virtual server 414 continues without interruption during the failover event since it has sufficient information to reconstruct all of the cluster data via the virtual drives 420 and 422 in accordance with the data redundancy configuration operation. The data redundancy operation continues even while a virtual drive is missing or unavailable as long as the remaining drives store sufficient redundant information to recover the data set for the cluster 400. When the physical server 402 is back online, the OS 426 automatically re-synchronizes the data on the virtual drive 418 with the virtual drives 420 and 422 in accordance with data redundancy operation. The virtual server 414 may continue as the active node, or the virtual server 412 may be promoted (while the virtual server 414 is suspended and demoted) to return to the original configuration. In a similar manner, the virtual server 416 includes the OS 428, a local link 421 to the virtual drive 422 and network links 431 and 433 to access the virtual drives 418 and 420, respectively. In this manner, the virtual server 416 may also be selected by the interlink 410 to be promoted to active to replace the failed previously-active node.
  • In a similar manner as previously described, in one embodiment, a configuration change is made regarding use of the [0047] virtual drives 418-422 at the virtual software level prior to failover and is transparent to the operating systems 424, 426 and 428. Each of the operating systems 424, 426 and 428 is configured to use the same redundant drive set (e.g., the same RAID level 5 drive set) in which “drive 1” is the local drive and the remaining drives are configured as remote drives. The OS 424, when active, sees the virtual drives 418-422 as a redundant drive set in which drive 1 is the virtual drive 418. When the OS 426 is activated upon failover, its identity and operation is substantially the same as the OS 424 in that it sees the same redundant drive set including the virtual drives 418-422, except that, since the OS 426 is activated on a different physical server, it sees drive 1 as the virtual drive 420. If the OS 428 is selected to be activated on failover, it sees the same data redundant drive set in which drive 1 is the virtual drive 422. The remaining drives in either case are ordered in a compatible manner to maintain the integrity of the drive set. In this manner, when a passive virtual server is activated, its OS does not see a serious drive failure situation since its primary drive is operable. This is true even if the data redundant or mirrored drive of the failed server is temporarily unavailable. This configuration change improves the recovery process since the activated passive server can automatically resume operations as though a failure had not occurred.
  • It is appreciated that any one of the virtual drives [0048] 418-422 may be temporarily taken offline without losing data. Further, a different virtual drive may be brought online to take the place of the removed drive, and the operative OS of the active node automatically synchronizes the data on the new drive. The new virtual drive may be located on the same or even a different physical server as the removed virtual drive as long as the appropriate network links are provided. It is further appreciated that the passive virtual servers 414 and 416 in the cluster 400 may be temporarily removed and/or replaced with similar servers on the same or different physical servers. For example, a new physical server (not shown) may be provided to replace the physical server 404, where the virtual server 414 and/or virtual drive 420 may be moved to the new physical server or otherwise replaced as well. A separate management function or service (not shown) may be provided and operated on any of the physical servers 402-406 and/or virtual servers 412-416 or on different physical or virtual servers.
  • Also, the numbers of physical servers, virtual servers and virtual drives do not have to be equal. An active and passive pair of virtual servers may be supported with three or more virtual drives, such as in a RAID level 5 configuration. In general, any number of virtual servers and any number of virtual drives are contemplated as long as redundant data is appropriately maintained in the drive set to ensure against loss of data, such as provided in many RAID configurations. Although multiple virtual servers and/or virtual drives may be located on any single physical server, it may be desired to distribute the virtual servers and drives among as many physical servers as are available to ensure against loss of data in the event of failure of any one physical server. In one embodiment, the collective set of virtual drives for the cluster may store one or more disk images used to generate one or more of the passive virtual servers. The disk image(s) incorporate(s) attributes of the active virtual server. [0049]
  • In one embodiment, the [0050] interlink 410 incorporates management functionality to set up, monitor and maintain one or more clusters in a similar manner as previously described with respect to the interlink 310. The interlink 410 may be designed, for example, with any level of intelligence to properly build a cluster configuration, to optimize cluster load and operation over time in response to available resources and dynamic load levels, and to re-build in response to node failure and during and after restoration. The interlink 410 selects from among multiple passive nodes for promotion to active in response to a failover event based on any combination of static considerations (e.g., a predetermined passive activation order) and dynamic considerations, such as existing or anticipated loads. For example, if the physical server 404 is more heavily loaded as compared to the physical server 406 during a failover event in which the active physical server 402 fails, the interlink 410 selects to promote the virtual server 416 instead of the virtual server 414. Alternatively, even if the physical servers 404 and 406 are relatively evenly loaded during the failure, if the interlink 410 has information that the physical server 404 is about to incur a greater load, the interlink 410 selects the virtual server 416 on the physical server 406.
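  • A promotion policy of that kind might look like the following sketch, which walks a preconfigured activation order and skips hosts whose current plus anticipated load leaves no headroom; the threshold and load figures are invented for illustration.
```python
def select_passive(order, load, anticipated, max_load=0.75):
    """Pick the first candidate (in configured order) with enough headroom."""
    viable = [c for c in order
              if load.get(c, 0.0) + anticipated.get(c, 0.0) <= max_load]
    if not viable:
        # No host has headroom: fall back to the least-loaded candidate.
        return min(order, key=lambda c: load.get(c, 0.0))
    return viable[0]

order = ["PS2", "PS3"]                   # preconfigured passive activation order
current = {"PS2": 0.70, "PS3": 0.30}     # relative load reported by the monitor
expected = {"PS2": 0.20, "PS3": 0.10}    # e.g. a batch job about to start on PS2
assert select_passive(order, current, expected) == "PS3"
```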
  • FIG. 5 is a block diagram of a [0051] cluster 500 similar to the cluster 300 except including a differential drive for each virtual drive. Similar components include identical reference numbers, where physical servers 302, 304, the network 306, the interlink 310, and the virtual servers 312 and 314 with operating systems 320 and 322, respectively, are included. The virtual drives 316 and 318 are replaced with static virtual drives 502 and 506, respectively, which may represent “snapshots” of the virtual drives 316 and 318 at a specific time. Changes for the virtual drive 502 are stored in a differential drive 504 rather than being immediately incorporated within the virtual drive 502. Similarly, changes for the virtual drive 506 are stored in a differential drive 508 rather than being immediately incorporated within the virtual drive 506. Thus, the static virtual drive 502 and its differential drive 504 replace the virtual drive 316 and the static virtual drive 506 and its differential drive 508 replace the virtual drive 318. The network link 324 is replaced with a network link 510 between the virtual server 312 and the differential drive 508 and the network link 326 is replaced with a network link 512 between the virtual server 314 and the differential drive 504. Although only one differential drive is shown for each virtual server, it is understood that a chain of one or more differential drives may be employed.
  • Operation of the [0052] cluster 500 is substantially similar to the cluster 300, except that the differential drives 504, 508 enable optimized disk input/output (I/O) to increase speed, efficiency and performance. The operating systems 320, 322 configure and operate the drive pair 502, 504 in a mirrored configuration with the drive pair 506, 508 in a similar manner as previously described. Again, the link 512 is optional and the OS 322 need not have access to the virtual drive 502 to maintain the integrity of the data. Furthermore, the static state of the virtual drives 502 and 506 remain intact so that a roll-back may be performed to recapture the original static states of the virtual drives 502 and 506. The changes stored in the differential drives 504, 508 may be discarded or otherwise stored and re-used to recapture the additional changes to the static states of the virtual drives 502 and 506, if desired. In this manner, the use of differential drives for a cluster provides enhanced efficiency, performance and flexibility. The use of differential drives may be extended to other RAID configurations, such as, for example, RAID level 5 configurations.
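  • In copy-on-write terms, the static drive is a frozen snapshot and the differential drive absorbs every change, so a roll-back simply discards the differential. A minimal sketch follows; block granularity and method names are assumptions.
```python
class DifferentialDrive:
    """Static snapshot (e.g. drive 502) plus a differential overlay (e.g. 504)."""

    def __init__(self, static_blocks):
        self.static = dict(static_blocks)   # frozen snapshot, never modified
        self.diff = {}                      # changes accumulated since snapshot

    def write(self, block, data):
        self.diff[block] = data             # writes never touch the static image

    def read(self, block):
        return self.diff.get(block, self.static.get(block))

    def rollback(self):
        self.diff.clear()                   # recapture the original static state

drive = DifferentialDrive({0: b"base"})
drive.write(0, b"changed")
assert drive.read(0) == b"changed"
drive.rollback()
assert drive.read(0) == b"base"
```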
  • FIG. 6 is a block diagram of a [0053] cluster 600 similar to the cluster 300 except communicatively coupled via an inter-network 602. Similar components include identical reference numbers, where physical servers 302, 304, the interlink 310, the virtual servers 312 and 314 with operating systems 320 and 322, respectively, are included. In this case, the network 306 is replaced with the inter-network 602 illustrating geographically split locations and remote communications. The illustration of the inter-network 602 is not intended to limit the embodiments of the network 306 to a local configuration, but instead to illustrate that the network 306 may be implemented as the inter-network 602 to physically separate the physical servers 302 and 304 as far apart from each other as desired. The inter-network 602 spans any size of geographic area, including multiple networks interconnected together to span large geographic areas, such as wide-area networks (WANs) or the Internet or the like. Operation of the cluster 600 is substantially the same as the cluster 300.
  • FIG. 7 is a block diagram of a [0054] cluster 700 similar to the cluster 300 except extended to include additional physical servers to form a cluster ring or cluster chain. Similar components include identical reference numbers, where physical servers 302 and 304, the network 306, and the virtual servers 312 and 314 with operating systems 320 and 322, respectively, are included. In this case, the network 306 is extended to include at least one additional physical server (PS3) 702, which includes another virtual server 704 and corresponding OS 706. The virtual server 704 is linked to a local virtual drive 708 via local link 710. The interlink 310 is replaced with interlink 714, which is provided between the virtual servers 312 and 314 in a similar manner as the interlink 310 and is further interfaced to the virtual server 704 and/or the physical server 702. Operation of the cluster 700 is similar to the cluster 300 in which the virtual server 312 is the active server and the virtual server 314 is a passive server in an A/P cluster configuration. Also, the virtual drives 316 and 318 are operated in a mirrored configuration in which the virtual drive 318 stores a copy of the cluster data set. In the event of failover, the virtual server 314 is promoted to active status to maintain operation of the cluster 700 without data loss.
  • In one embodiment, the [0055] virtual server 314 is an inactive instance of the virtual server 312, and is preconfigured so that the OS 322, upon activation of the virtual server 314, sees the virtual drive 318 as its primary drive. Also, in a similar manner as previously described, the virtual server 314 may be preconfigured so that the OS 322 sees the virtual drive 316 as the mirrored drive in the mirrored configuration, even if it is not immediately available because of failure of the physical server 302. Alternatively, in the cluster 700, a network link 712 is provided between the virtual server 314 and the virtual drive 708, and the virtual server 314 is preconfigured so that the OS 322 instead sees the virtual drive 708 as the mirrored drive in the mirrored configuration. In this manner, the virtual drive 708 is a replacement drive for the virtual drive 316 to maintain a mirrored configuration. Upon activation of the virtual server 314, the OS 322 copies the cluster data set from the virtual drive 318 to the virtual drive 708 and then maintains the virtual drives 318 and 708 in a mirrored configuration. The interlink 714 may maintain the new failover cluster configuration using the virtual server 314 as the active server regardless of whether the virtual server 312 comes back online.
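  • Re-establishing the mirror on the replacement drive is essentially a bulk copy followed by ordinary mirroring, roughly as sketched below (dictionaries stand in for the virtual drives; a real drive API is assumed).
```python
def rebuild_mirror(surviving: dict, replacement: dict) -> None:
    """Copy the surviving data set onto the replacement drive."""
    replacement.clear()
    replacement.update(surviving)

vd318 = {0: b"cluster", 1: b"data"}   # surviving copy on physical server 304
vd708 = {}                            # replacement drive on physical server 702
rebuild_mirror(vd318, vd708)
assert vd708 == vd318                 # mirroring of new writes then continues
```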
  • In yet another embodiment, the [0056] virtual server 704 may also be an inactive instance of the virtual server 312, so that when the virtual server 314 is made active in response to the failover event, the interlink 714 employs the virtual server 704 as the new passive virtual server. An optional network link 716 is provided so that the OS 706 may maintain the cluster data in a mirrored configuration in response to another failover event in which the virtual server 704 is promoted to active status to replace the previously active virtual server 314. The interlink 714 is configured to make the failover decisions based on any of the considerations previously described or described further below.
  • It is appreciated that the cluster chain configuration of the [0057] cluster 700 provides at least one replacement virtual drive to replace a failed drive in the data redundant configuration. Additional physical servers and corresponding virtual drives may be included in the network to operate as replacement virtual drives. Also, the cluster chain configuration applies equally to data redundant configurations including three or more virtual drives. The cluster is preconfigured to replace a failed drive in the cluster with a new drive, and the newly active OS is configured to use the replacement drive to re-establish the data redundant configuration, albeit with a different set of virtual drives.
  • FIG. 8 is a simplified block diagram illustrating an exemplary embodiment of an [0058] interlink 800 configured as a service and with expanded capabilities, which may be employed as any of the interlinks 310, 410 and 714. The interlink 800 includes a status monitor 802, a load monitor 804 and a cluster manager 806 interfaced with each other. The status monitor 802 monitors the status of each of the active nodes (physical and virtual servers) of each cluster and provides failure notifications. The load monitor 804 monitors the relative load of the physical servers and provides load information. The cluster manager 806 configures and manages one or more clusters using a plurality of virtual servers implemented on a corresponding plurality of physical servers. The cluster manager 806 configures and maintains each cluster, manages operation of each established cluster, modifies cluster components if and when necessary or desired, and terminates clusters if and when desired.
  • For example, the [0059] cluster manager 806 receives a failure notification for a failed active node, selects a passive virtual server for promotion, and manages the failover process to maintain cluster operation without data loss. The cluster manager 806 uses load information from the load monitor 804 to determine which passive server to promote, and periodically or continuously monitors load information to optimize operation of each cluster. As previously described, the interlink (e.g., 310 or 410 or 714) may be very simple or relatively complex. It is advantageous if the interlink at least has the capability to manage the failover process, such as by selecting and promoting a passive server to active mode to resume handling of the cluster load, to simplify the configuration and operation of the virtual servers and virtual drives in the cluster. In particular, the virtual servers, operating systems, application programs, virtual drives and virtual software need not have any cluster capabilities at all or be aware of cluster operation or configuration.
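  • Put together, the three pieces of the interlink service might cooperate as in the sketch below; the classes, method names and load figures are illustrative assumptions, and the failure and load feeds are stubbed rather than wired to real monitoring.
```python
class StatusMonitor:
    def __init__(self):
        self.failed = set()
    def report_failure(self, node):          # would be driven by heartbeats etc.
        self.failed.add(node)

class LoadMonitor:
    def __init__(self, loads):
        self.loads = loads                   # physical server -> load fraction
    def least_loaded(self, hosts):
        return min(hosts, key=lambda h: self.loads.get(h, 0.0))

class ClusterManager:
    def __init__(self, status, load, active, passives):
        self.status, self.load = status, load
        self.active, self.passives = active, dict(passives)   # node -> host

    def handle_failover(self):
        """Promote the passive node on the lightest-loaded host, if needed."""
        if self.active not in self.status.failed:
            return self.active
        host = self.load.least_loaded(self.passives.values())
        promoted = next(n for n, h in self.passives.items() if h == host)
        del self.passives[promoted]
        self.active = promoted               # promoted node resumes the load
        return promoted

status, load = StatusMonitor(), LoadMonitor({"PS2": 0.8, "PS3": 0.2})
manager = ClusterManager(status, load, active="vs412",
                         passives={"vs414": "PS2", "vs416": "PS3"})
status.report_failure("vs412")
assert manager.handle_failover() == "vs416"  # PS3 carries the lighter load
```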
  • Although the present invention has been described in considerable detail with reference to certain preferred versions thereof, other versions and variations are possible and contemplated. Those skilled in the art should appreciate that they can readily use the disclosed conception and specific embodiments as a basis for designing or modifying other structures for carrying out the same purposes of the present invention without departing from the spirit and scope of the invention as defined by the following claims. [0060]

Claims (39)

1. A shared-nothing virtual cluster, comprising:
a plurality of virtual servers located on a corresponding plurality of physical servers linked together via a network;
said plurality of virtual servers collectively forming an active/passive (A/P) cluster including an active virtual server and at least one passive virtual server;
a plurality of virtual drives, each located on a corresponding one of said plurality of physical servers;
said active virtual server handling a cluster load and executing a first operating system (OS) that operates said plurality of virtual drives in a data redundant configuration that collectively stores a data set for said cluster;
each passive virtual server coupled to a sufficient number of said plurality of virtual drives with redundant information to recover said data set for said cluster; and
an interlink operatively configured to detect failure of said active virtual server and to initiate promotion of one of said at least one passive virtual server to active status to resume handling said cluster load after failover.
2. The shared-nothing virtual cluster of claim 1, wherein:
said plurality of virtual servers comprises said active virtual server located on a first physical server and a passive virtual server located on a second physical server;
wherein said plurality of virtual drives comprises a first virtual drive located on said first physical server and a second virtual drive located on said second physical server; and
wherein said first OS maintains said first and second virtual drives in a mirrored configuration in which said first virtual drive is a primary drive storing said data set and wherein said second virtual drive is a mirrored drive storing a copy of said data set.
3. The shared-nothing virtual cluster of claim 2, wherein said passive virtual server is an inactive instance of said active virtual server and includes a second OS that is configured, when said passive virtual server is activated, to operate said second virtual drive as its primary drive storing said data set.
4. The shared-nothing virtual cluster of claim 3, wherein said second OS is configured, when said passive virtual server is activated, to maintain said mirrored configuration in which said first virtual drive is a mirrored drive storing a copy of said data set.
5. The shared-nothing virtual cluster of claim 3, wherein said second OS is configured, when said passive virtual server is activated, to replace said first virtual drive in said mirrored configuration with a third virtual drive located on a third physical server as a mirrored drive storing a copy of said data set.
6. The shared-nothing virtual cluster of claim 1, wherein each of said plurality of virtual drives comprises a virtual static drive and at least one virtual differential drive.
7. The shared-nothing virtual cluster of claim 1, wherein said network comprises an inter-network and wherein at least one of said plurality of physical servers is physically located at a geographically remote location.
8. The shared-nothing virtual cluster of claim 1, wherein said plurality of virtual drives comprises at least three drives in said data redundant configuration.
9. The shared-nothing virtual cluster of claim 8, wherein said data redundant configuration comprises a RAID level 5 configuration.
10. The shared-nothing virtual cluster of claim 8, wherein at least one of said at least one passive virtual server comprises an inactive instance of said active virtual server including a corresponding OS that, when said inactive instance is activated, includes a network link to a majority of said plurality of virtual drives configured in said data redundant configuration.
11. The shared-nothing virtual cluster of claim 10, wherein said inactive instance is configured such that said corresponding OS uses a local one of said plurality of virtual drives as its primary drive.
12. The shared-nothing virtual cluster of claim 11, wherein said inactive instance is configured such that said corresponding OS uses a replacement virtual drive to replace a failed virtual drive in said data redundant configuration.
13. The shared-nothing virtual cluster of claim 1, wherein said interlink comprises:
a status monitor that detects failure of said active virtual server;
a load monitor that monitors relative load level of each of said plurality of physical servers; and
a cluster manager, interfaced with said status monitor and said load monitor, that configures and maintains said cluster, that manages failover, and that selects a passive server for promotion during said failover.
14. The shared-nothing virtual cluster of claim 13, wherein said cluster manager monitors load information from said load monitor to ensure adequate resources before and after said failover.
15. A virtual cluster, comprising:
a plurality of virtual drives located on a corresponding plurality of physical servers coupled together via a network;
a first virtual server located on a first of said plurality of physical servers, said first virtual server being active and handling a cluster load and storing cluster data in said plurality of virtual drives organized in a data redundant configuration;
a disk image incorporating attributes of said first virtual server stored on said plurality of virtual drives; and
an interlink operative to monitor said first virtual server and to initiate promotion of a second virtual server on a second of said plurality of physical servers to active status using said disk image in the event of failure of said first virtual server to effectuate failover;
wherein said second virtual server, when activated, resumes handling of said cluster load and accesses said cluster data.
16. The virtual cluster of claim 15, wherein said plurality of virtual drives comprises a first virtual drive located on said first physical server and a second virtual drive located on said second physical server, wherein said data redundant configuration comprises a mirrored configuration in which said first virtual drive stores said cluster data and wherein said second virtual drive stores a mirrored copy of said cluster data.
17. The virtual cluster of claim 16, wherein said second virtual server is configured to use said second virtual drive as its primary drive when said second virtual server is activated.
18. The virtual cluster of claim 17, wherein said second virtual server is configured to use a third virtual drive to store said mirrored copy of said cluster data in said mirrored configuration when said second virtual server is activated.
19. The virtual cluster of claim 16, wherein said first virtual server synchronizes cluster data between said first and second virtual drives when said first virtual server and first virtual drive are available.
20. The virtual cluster of claim 16, wherein said second virtual server, when activated, synchronizes cluster data between said first and second virtual drives when said first virtual drive is available.
21. The virtual cluster of claim 15, wherein said plurality of virtual drives comprises at least three virtual drives located on a corresponding at least three of said plurality of physical servers, wherein said data redundant configuration including said at least three virtual drives collectively stores said cluster data, and wherein said second virtual server accesses said cluster data from less than all of said plurality of virtual drives when activated.
22. The virtual cluster of claim 21, wherein said data redundant configuration comprises a RAID level 5 configuration.
23. The virtual cluster of claim 21, wherein said second virtual server is configured to use a local one of said plurality of virtual drives as its primary drive when said second virtual server is activated.
24. The virtual cluster of claim 23, wherein said second virtual server is configured to use a replacement virtual drive to replace a failed virtual drive in said data redundant configuration when said second virtual server is activated.
25. The virtual cluster of claim 21, wherein said interlink activates said second virtual server on a selected one of said at least three of said plurality of physical servers.
26. The virtual cluster of claim 25, wherein said interlink monitors relative load of said at least three of said plurality of physical servers and selects a physical server based on relative load.
27. The virtual cluster of claim 15, wherein said second virtual server is an inactive instance of said active virtual server prior to failover.
28. A method of configuring and operating a shared-nothing virtual cluster in a plurality of physical servers coupled together via a network including a plurality of virtual drives each located on a corresponding one of the plurality of physical servers, the method comprising:
operating an active virtual server on a first one of the plurality of physical servers to handle a cluster load;
storing, by the active virtual server, cluster data onto the plurality of virtual drives organized in a data redundant configuration;
detecting failure of the first physical server; and
in the event of failure of the first physical server, activating a second virtual server on a second one of the plurality of physical servers to resume handling of the cluster load and providing access by the activated second virtual server to a sufficient number of the plurality of virtual drives that collectively stores the cluster data.
29. The method of claim 28, wherein said activating a second virtual server comprises activating an inactive instance of said active virtual server.
30. The method of claim 28, wherein said storing cluster data comprises storing cluster data on a first virtual drive and storing a mirrored copy of the cluster data on a second virtual drive, and wherein said providing access by the activated second virtual server comprises providing access to the second virtual drive.
31. The method of claim 28, wherein said storing cluster data comprises storing cluster data in redundant format across at least three virtual drives, and wherein said providing access by the activated second virtual server comprises providing access to less than all of the plurality of virtual drives.
32. The method of claim 28, further comprising coupling the plurality of physical servers together via an inter-network that enables the physical servers to be remotely located in a large geographical area.
33. The method of claim 28, further comprising storing a disk image including persistent attributes of the active virtual server, and wherein said activating a second virtual server comprises using the disk image.
34. The method of claim 28, further comprising:
locating each of a plurality of static virtual drives on a corresponding one of the plurality of physical servers;
locating each of a plurality of differential virtual drives with a corresponding one of the plurality of static virtual drives; and
said storing cluster data comprising storing changes of the cluster data in the plurality of differential virtual drives.
35. The method of claim 28, further comprising monitoring relative load of each of the plurality of physical servers and providing load information.
36. The method of claim 35, further comprising selecting the first physical server to initially handle the cluster load based on the load information.
37. The method of claim 35, further comprising selecting the second physical server to resume handling of the cluster load based on the load information in the event of failure of the first physical server.
38. The method of claim 28, further comprising pre-configuring said second virtual server such that when it is activated, it executes an operating system that accesses the data redundant configuration including a local one of the plurality of virtual drives as its primary drive.
39. The method of claim 38, further comprising pre-configuring said second virtual server such that when it is activated, it executes an operating system that accesses the data redundant configuration including a replacement one of the plurality of virtual drives to replace a failed virtual drive.
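
For illustration only, a rough Python sketch (hypothetical names throughout, not drawn from the claims) of the mirrored shared-nothing arrangement recited above, for example in claims 1-5 and 28-30: each physical server holds a local virtual drive, the active virtual server mirrors every write across the drives, and failover promotes a passive instance that recovers the data set from its own local copy.

class VirtualDrive:
    def __init__(self, host: str):
        self.host = host
        self.blocks = {}              # virtual drive contents, keyed by block id

class VirtualServer:
    def __init__(self, host: str, drives):
        self.host = host
        self.drives = drives          # all virtual drives in the redundant set
        self.active = False

    def write(self, block_id, data):
        # data redundant (mirrored) configuration: every write goes to every drive
        for d in self.drives:
            d.blocks[block_id] = data

    def read(self, block_id):
        # a local drive alone is sufficient to recover the data set after failover
        local = next(d for d in self.drives if d.host == self.host)
        return local.blocks[block_id]

def failover(failed: VirtualServer, passive: VirtualServer) -> VirtualServer:
    # promote the passive instance; it resumes the cluster load using its local copy
    failed.active = False
    passive.active = True
    return passive

# Two physical servers, one virtual drive each, a mirrored data set.
d1, d2 = VirtualDrive("host-1"), VirtualDrive("host-2")
primary = VirtualServer("host-1", [d1, d2]); primary.active = True
standby = VirtualServer("host-2", [d1, d2])
primary.write("row-42", "cluster data")
failover(primary, standby)
assert standby.read("row-42") == "cluster data"   # data set recovered from the local mirror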
US10/858,295 2003-06-02 2004-06-01 Shared nothing virtual cluster Active 2025-10-11 US7287186B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/858,295 US7287186B2 (en) 2003-06-02 2004-06-01 Shared nothing virtual cluster

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US47499203P 2003-06-02 2003-06-02
US10/858,295 US7287186B2 (en) 2003-06-02 2004-06-01 Shared nothing virtual cluster

Publications (2)

Publication Number Publication Date
US20040243650A1 true US20040243650A1 (en) 2004-12-02
US7287186B2 US7287186B2 (en) 2007-10-23

Family

ID=33457655

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/858,295 Active 2025-10-11 US7287186B2 (en) 2003-06-02 2004-06-01 Shared nothing virtual cluster

Country Status (1)

Country Link
US (1) US7287186B2 (en)

Cited By (57)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040210591A1 (en) * 2002-03-18 2004-10-21 Surgient, Inc. Server file management
US20050097394A1 (en) * 2000-03-22 2005-05-05 Yao Wang Method and apparatus for providing host resources for an electronic commerce site
US20050267888A1 (en) * 2004-05-26 2005-12-01 Masataka Kan Method for process substitution on a database management system
US20050289540A1 (en) * 2004-06-24 2005-12-29 Lu Nguyen Providing on-demand capabilities using virtual machines and clustering processes
US20050289218A1 (en) * 2004-06-28 2005-12-29 Rothman Michael A Method to enable remote storage utilization
US20060010176A1 (en) * 2004-06-16 2006-01-12 Armington John P Systems and methods for migrating a server from one physical platform to a different physical platform
US20060070060A1 (en) * 2004-09-28 2006-03-30 International Business Machines Corporation Coordinating service performance and application placement management
US20060075101A1 (en) * 2004-09-29 2006-04-06 International Business Machines Corporation Method, system, and computer program product for supporting a large number of intermittently used application clusters
US20060155912A1 (en) * 2005-01-12 2006-07-13 Dell Products L.P. Server cluster having a virtual server
US20070006015A1 (en) * 2005-06-29 2007-01-04 Rao Sudhir G Fault-tolerance and fault-containment models for zoning clustered application silos into continuous availability and high availability zones in clustered systems during recovery and maintenance
US20070079171A1 (en) * 2005-09-30 2007-04-05 Mehrdad Aidun No data loss it disaster recovery over extended distances
US20070078861A1 (en) * 2005-09-30 2007-04-05 Mehrdad Aidun Disaster recover/continuity of business adaptive solution framework
US20070078982A1 (en) * 2005-09-30 2007-04-05 Mehrdad Aidun Application of virtual servers to high availability and disaster recovery soultions
US20070115818A1 (en) * 2005-11-04 2007-05-24 Bose Patrick G Triggered notification
US20070220028A1 (en) * 2006-03-15 2007-09-20 Masami Hikawa Method and system for managing load balancing in data-processing system
US7293154B1 (en) * 2004-11-18 2007-11-06 Symantec Operating Corporation System and method for optimizing storage operations by operating only on mapped blocks
US20080071793A1 (en) * 2006-09-19 2008-03-20 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Using network access port linkages for data structure update decisions
US20080126834A1 (en) * 2006-08-31 2008-05-29 Dell Products, Lp On-demand provisioning of computer resources in physical/virtual cluster environments
US20080168193A1 (en) * 2007-01-10 2008-07-10 International Business Machines Corporation Use of unique identifiers for each data format supported by a multi-format data store
US20080195760A1 (en) * 2007-02-14 2008-08-14 Yakov Nudler Virtual Personal Computer Access Over Multiple Network Sites
US20080243866A1 (en) * 2007-03-29 2008-10-02 Manu Pandey System and method for improving cluster performance
US7480816B1 (en) 2005-08-04 2009-01-20 Sun Microsystems, Inc. Failure chain detection and recovery in a group of cooperating systems
US20090089406A1 (en) * 2007-09-30 2009-04-02 Sun Microsystems, Inc. Virtual cluster based upon operating system virtualization
US20090217021A1 (en) * 2008-02-22 2009-08-27 Garth Richard Goodson System and method for fast restart of a guest operating system in a virtual machine environment
US20090228541A1 (en) * 2008-03-04 2009-09-10 Barsness Eric L Network virtualization in a multi-node system with multiple networks
US20100169497A1 (en) * 2008-12-31 2010-07-01 Sap Ag Systems and methods for integrating local systems with cloud computing resources
US20100169477A1 (en) * 2008-12-31 2010-07-01 Sap Ag Systems and methods for dynamically provisioning cloud computing resources
US7877553B2 (en) 2007-08-06 2011-01-25 Microsoft Corporation Sharing volume data via shadow copies using differential areas
US20120005670A1 (en) * 2010-06-30 2012-01-05 Sap Ag Distributed cloud computing architecture
US20120150815A1 (en) * 2010-12-09 2012-06-14 Ibm Corporation Efficient backup and restore of virtual input/output server (vios) cluster
US20120246642A1 (en) * 2011-03-24 2012-09-27 Ibm Corporation Management of File Images in a Virtual Environment
US20130198559A1 (en) * 2006-12-21 2013-08-01 Maxsp Corporation Virtual recovery server
US20130204995A1 (en) * 2010-06-18 2013-08-08 Nokia Siemens Networks Oy Server cluster
US8745171B1 (en) 2006-12-21 2014-06-03 Maxsp Corporation Warm standby appliance
US8761546B2 (en) 2007-10-26 2014-06-24 Maxsp Corporation Method of and system for enhanced data storage
US8812613B2 (en) 2004-06-03 2014-08-19 Maxsp Corporation Virtual application manager
US8811396B2 (en) 2006-05-24 2014-08-19 Maxsp Corporation System for and method of securing a network utilizing credentials
US20140237306A1 (en) * 2013-02-19 2014-08-21 Nec Corporation Management device, management method, and medium
US20140280956A1 (en) * 2013-03-14 2014-09-18 Vmware, Inc. Methods and systems to manage computer resources in elastic multi-tenant cloud computing systems
US8898319B2 (en) 2006-05-24 2014-11-25 Maxsp Corporation Applications and services as a bundle
US20140380087A1 (en) * 2013-06-25 2014-12-25 International Business Machines Corporation Fault Tolerance Solution for Stateful Applications
US8977887B2 (en) 2007-10-26 2015-03-10 Maxsp Corporation Disaster recovery appliance
US20150242289A1 (en) * 2012-11-20 2015-08-27 Hitachi, Ltd. Storage system and data management method
US9154367B1 (en) * 2011-12-27 2015-10-06 Google Inc. Load balancing and content preservation
WO2015190934A1 (en) * 2014-06-13 2015-12-17 Mhwirth As Method and system for controlling well operations
US9317506B2 (en) 2006-09-22 2016-04-19 Microsoft Technology Licensing, Llc Accelerated data transfer using common prior data segments
US9357031B2 (en) 2004-06-03 2016-05-31 Microsoft Technology Licensing, Llc Applications as a service
US20160197795A1 (en) * 2005-02-28 2016-07-07 Microsoft Technology Licensing, Llc Discovering and monitoring server clusters
US9448858B2 (en) 2007-10-26 2016-09-20 Microsoft Technology Licensing, Llc Environment manager
US9680699B2 (en) 2006-09-19 2017-06-13 Invention Science Fund I, Llc Evaluation systems and methods for coordinating software agents
US9720619B1 (en) * 2012-12-19 2017-08-01 Springpath, Inc. System and methods for efficient snapshots in a distributed system of hybrid storage and compute nodes
US10397087B1 (en) * 2016-12-27 2019-08-27 EMC IP Holding Company LLC Status monitoring system and method
US20200042394A1 (en) * 2018-07-31 2020-02-06 EMC IP Holding Company LLC Managing journaling resources with copies stored in multiple locations
US11385981B1 (en) * 2018-12-28 2022-07-12 Virtuozzo International Gmbh System and method for deploying servers in a distributed storage to improve fault tolerance
US11416354B2 (en) * 2019-09-05 2022-08-16 EMC IP Holding Company LLC Techniques for providing intersite high availability of data nodes in a virtual cluster
US11435916B2 (en) * 2019-06-26 2022-09-06 EMC IP Holding Company LLC Mapping of data storage system for a redundant array of independent nodes
US20230336621A1 (en) * 2022-04-15 2023-10-19 Avaya Management L.P. Call and media preserving failovers in a cloud environment

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7418702B2 (en) * 2002-08-06 2008-08-26 Sheng (Ted) Tai Tsao Concurrent web based multi-task support for control management system
US7769004B2 (en) * 2003-09-26 2010-08-03 Surgient, Inc. Network abstraction and isolation layer for masquerading machine identity of a computer
US7515589B2 (en) * 2004-08-27 2009-04-07 International Business Machines Corporation Method and apparatus for providing network virtualization
US7444538B2 (en) * 2004-09-21 2008-10-28 International Business Machines Corporation Fail-over cluster with load-balancing capability
EP1806657B1 (en) * 2004-10-18 2010-05-26 Fujitsu Ltd. Operation management program, operation management method, and operation management device
EP1814027A4 (en) * 2004-10-18 2009-04-29 Fujitsu Ltd Operation management program, operation management method, and operation management apparatus
EP1811376A4 (en) * 2004-10-18 2007-12-26 Fujitsu Ltd Operation management program, operation management method, and operation management apparatus
JP4462024B2 (en) 2004-12-09 2010-05-12 株式会社日立製作所 Failover method by disk takeover
US7505401B2 (en) * 2005-01-31 2009-03-17 International Business Machines Corporation Method, apparatus and program storage device for providing mutual failover and load-balancing between interfaces in a network
JP4839841B2 (en) * 2006-01-04 2011-12-21 株式会社日立製作所 How to restart snapshot
US8078728B1 (en) 2006-03-31 2011-12-13 Quest Software, Inc. Capacity pooling for application reservation and delivery
US7797566B2 (en) * 2006-07-11 2010-09-14 Check Point Software Technologies Ltd. Application cluster in security gateway for high availability and load sharing
US7840839B2 (en) * 2007-11-06 2010-11-23 Vmware, Inc. Storage handling for fault tolerance in virtual machines
JP5011073B2 (en) * 2007-11-22 2012-08-29 株式会社日立製作所 Server switching method and server system
US8194674B1 (en) 2007-12-20 2012-06-05 Quest Software, Inc. System and method for aggregating communications and for translating between overlapping internal network addresses and unique external network addresses
US8065559B2 (en) * 2008-05-29 2011-11-22 Citrix Systems, Inc. Systems and methods for load balancing via a plurality of virtual servers upon failover using metrics from a backup virtual server
US8996909B2 (en) * 2009-10-08 2015-03-31 Microsoft Corporation Modeling distribution and failover database connectivity behavior
US8477610B2 (en) * 2010-05-31 2013-07-02 Microsoft Corporation Applying policies to schedule network bandwidth among virtual machines
JP5170794B2 (en) * 2010-09-28 2013-03-27 株式会社バッファロー Storage system and failover control method
US8756455B2 (en) 2011-11-17 2014-06-17 Microsoft Corporation Synchronized failover for active-passive applications
US20130304901A1 (en) * 2012-05-11 2013-11-14 James Malnati Automated integration of disparate system management tools
US9471586B2 (en) 2013-01-10 2016-10-18 International Business Machines Corporation Intelligent selection of replication node for file data blocks in GPFS-SNC
CN103973470A (en) * 2013-01-31 2014-08-06 国际商业机器公司 Cluster management method and equipment for shared-nothing cluster
US9280428B2 (en) 2013-04-23 2016-03-08 Neftali Ripoll Method for designing a hyper-visor cluster that does not require a shared storage device

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5917997A (en) * 1996-12-06 1999-06-29 International Business Machines Corporation Host identity takeover using virtual internet protocol (IP) addressing
US6247057B1 (en) * 1998-10-22 2001-06-12 Microsoft Corporation Network server supporting multiple instance of services to operate concurrently by having endpoint mapping subsystem for mapping virtual network names to virtual endpoint IDs
US20030018927A1 (en) * 2001-07-23 2003-01-23 Gadir Omar M.A. High-availability cluster virtual server system
US6553401B1 (en) * 1999-07-09 2003-04-22 Ncr Corporation System for implementing a high volume availability server cluster including both sharing volume of a mass storage on a local site and mirroring a shared volume on a remote site
US20030105829A1 (en) * 2001-11-28 2003-06-05 Yotta Yotta, Inc. Systems and methods for implementing content sensitive routing over a wide area network (WAN)
US6625705B2 (en) * 1993-04-23 2003-09-23 Emc Corporation Remote data mirroring system having a service processor
US20030188233A1 (en) * 2002-03-28 2003-10-02 Clark Lubbers System and method for automatic site failover in a storage area network
US20040078467A1 (en) * 2000-11-02 2004-04-22 George Grosner Switching system
US6745303B2 (en) * 2002-01-03 2004-06-01 Hitachi, Ltd. Data synchronization of multiple remote storage
US7043665B2 (en) * 2003-06-18 2006-05-09 International Business Machines Corporation Method, system, and program for handling a failover to a remote storage location
US7065589B2 (en) * 2003-06-23 2006-06-20 Hitachi, Ltd. Three data center remote copy system with journaling
US7200622B2 (en) * 2004-03-19 2007-04-03 Hitachi, Ltd. Inter-server dynamic transfer method for virtual file servers
US7222172B2 (en) * 2002-04-26 2007-05-22 Hitachi, Ltd. Storage system having virtualized resource
US7234075B2 (en) * 2003-12-30 2007-06-19 Dell Products L.P. Distributed failover aware storage area network backup of application data in an active-N high availability cluster

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4912628A (en) 1988-03-15 1990-03-27 International Business Machines Corp. Suspending and resuming processing of tasks running in a virtual machine data processing system
US5201049A (en) 1988-09-29 1993-04-06 International Business Machines Corporation System for executing applications program concurrently/serially on different virtual machines
US5062037A (en) 1988-10-24 1991-10-29 Ibm Corp. Method to provide concurrent execution of distributed application programs by a host computer and an intelligent work station on an sna network
US5802290A (en) 1992-07-29 1998-09-01 Virtual Computer Corporation Computer network of distributed virtual computers which are EAC reconfigurable in response to instruction to be executed
US5555376A (en) 1993-12-03 1996-09-10 Xerox Corporation Method for granting a user request having locational and contextual attributes consistent with user policies for devices having locational attributes consistent with the user request
SE9402059D0 (en) 1994-06-13 1994-06-13 Ellemtel Utvecklings Ab Methods and apparatus for telecommunications
US5996026A (en) 1995-09-05 1999-11-30 Hitachi, Ltd. Method and apparatus for connecting i/o channels between sub-channels and devices through virtual machines controlled by a hypervisor using ID and configuration information
US6272523B1 (en) 1996-12-20 2001-08-07 International Business Machines Corporation Distributed networking using logical processes
US6003050A (en) 1997-04-02 1999-12-14 Microsoft Corporation Method for integrating a virtual machine with input method editors
US6075938A (en) 1997-06-10 2000-06-13 The Board Of Trustees Of The Leland Stanford Junior University Virtual machine monitors for scalable multiprocessors
US6041347A (en) 1997-10-24 2000-03-21 Unified Access Communications Computer system and computer-implemented process for simultaneous configuration and monitoring of a computer network
US6272537B1 (en) 1997-11-17 2001-08-07 Fujitsu Limited Method for building element manager for a computer network element using a visual element manager builder process
US6256637B1 (en) 1998-05-05 2001-07-03 Gemstone Systems, Inc. Transactional virtual machine architecture
US6496847B1 (en) 1998-05-15 2002-12-17 Vmware, Inc. System and method for virtualizing computer systems

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6625705B2 (en) * 1993-04-23 2003-09-23 Emc Corporation Remote data mirroring system having a service processor
US5917997A (en) * 1996-12-06 1999-06-29 International Business Machines Corporation Host identity takeover using virtual internet protocol (IP) addressing
US6247057B1 (en) * 1998-10-22 2001-06-12 Microsoft Corporation Network server supporting multiple instance of services to operate concurrently by having endpoint mapping subsystem for mapping virtual network names to virtual endpoint IDs
US6553401B1 (en) * 1999-07-09 2003-04-22 Ncr Corporation System for implementing a high volume availability server cluster including both sharing volume of a mass storage on a local site and mirroring a shared volume on a remote site
US20040078467A1 (en) * 2000-11-02 2004-04-22 George Grosner Switching system
US20030018927A1 (en) * 2001-07-23 2003-01-23 Gadir Omar M.A. High-availability cluster virtual server system
US20030105829A1 (en) * 2001-11-28 2003-06-05 Yotta Yotta, Inc. Systems and methods for implementing content sensitive routing over a wide area network (WAN)
US6745303B2 (en) * 2002-01-03 2004-06-01 Hitachi, Ltd. Data synchronization of multiple remote storage
US20030188233A1 (en) * 2002-03-28 2003-10-02 Clark Lubbers System and method for automatic site failover in a storage area network
US7222172B2 (en) * 2002-04-26 2007-05-22 Hitachi, Ltd. Storage system having virtualized resource
US7043665B2 (en) * 2003-06-18 2006-05-09 International Business Machines Corporation Method, system, and program for handling a failover to a remote storage location
US7065589B2 (en) * 2003-06-23 2006-06-20 Hitachi, Ltd. Three data center remote copy system with journaling
US7234075B2 (en) * 2003-12-30 2007-06-19 Dell Products L.P. Distributed failover aware storage area network backup of application data in an active-N high availability cluster
US7200622B2 (en) * 2004-03-19 2007-04-03 Hitachi, Ltd. Inter-server dynamic transfer method for virtual file servers

Cited By (117)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7475285B2 (en) * 2000-03-22 2009-01-06 Emc Corporation Method and apparatus for providing host resources for an electronic commerce site
US20050097394A1 (en) * 2000-03-22 2005-05-05 Yao Wang Method and apparatus for providing host resources for an electronic commerce site
US7257584B2 (en) * 2002-03-18 2007-08-14 Surgient, Inc. Server file management
US20040210591A1 (en) * 2002-03-18 2004-10-21 Surgient, Inc. Server file management
US20050267888A1 (en) * 2004-05-26 2005-12-01 Masataka Kan Method for process substitution on a database management system
US7536422B2 (en) * 2004-05-26 2009-05-19 Hitachi, Ltd. Method for process substitution on a database management system
US9357031B2 (en) 2004-06-03 2016-05-31 Microsoft Technology Licensing, Llc Applications as a service
US9569194B2 (en) 2004-06-03 2017-02-14 Microsoft Technology Licensing, Llc Virtual application manager
US8812613B2 (en) 2004-06-03 2014-08-19 Maxsp Corporation Virtual application manager
US20060010176A1 (en) * 2004-06-16 2006-01-12 Armington John P Systems and methods for migrating a server from one physical platform to a different physical platform
US7769720B2 (en) * 2004-06-16 2010-08-03 Hewlett-Packard Development Company, L.P. Systems and methods for migrating a server from one physical platform to a different physical platform
US7577959B2 (en) * 2004-06-24 2009-08-18 International Business Machines Corporation Providing on-demand capabilities using virtual machines and clustering processes
US20050289540A1 (en) * 2004-06-24 2005-12-29 Lu Nguyen Providing on-demand capabilities using virtual machines and clustering processes
US20050289218A1 (en) * 2004-06-28 2005-12-29 Rothman Michael A Method to enable remote storage utilization
US8224465B2 (en) * 2004-09-28 2012-07-17 International Business Machines Corporation Coordinating service performance and application placement management
US7720551B2 (en) * 2004-09-28 2010-05-18 International Business Machines Corporation Coordinating service performance and application placement management
US20060070060A1 (en) * 2004-09-28 2006-03-30 International Business Machines Corporation Coordinating service performance and application placement management
US20080216088A1 (en) * 2004-09-28 2008-09-04 Tantawi Asser N Coordinating service performance and application placement management
US20100223379A1 (en) * 2004-09-28 2010-09-02 International Business Machines Corporation Coordinating service performance and application placement management
US20060075101A1 (en) * 2004-09-29 2006-04-06 International Business Machines Corporation Method, system, and computer program product for supporting a large number of intermittently used application clusters
US7552215B2 (en) * 2004-09-29 2009-06-23 International Business Machines Corporation Method, system, and computer program product for supporting a large number of intermittently used application clusters
US7293154B1 (en) * 2004-11-18 2007-11-06 Symantec Operating Corporation System and method for optimizing storage operations by operating only on mapped blocks
US20060155912A1 (en) * 2005-01-12 2006-07-13 Dell Products L.P. Server cluster having a virtual server
US20160197795A1 (en) * 2005-02-28 2016-07-07 Microsoft Technology Licensing, Llc Discovering and monitoring server clusters
US10348577B2 (en) * 2005-02-28 2019-07-09 Microsoft Technology Licensing, Llc Discovering and monitoring server clusters
US8195976B2 (en) * 2005-06-29 2012-06-05 International Business Machines Corporation Fault-tolerance and fault-containment models for zoning clustered application silos into continuous availability and high availability zones in clustered systems during recovery and maintenance
US8286026B2 (en) 2005-06-29 2012-10-09 International Business Machines Corporation Fault-tolerance and fault-containment models for zoning clustered application silos into continuous availability and high availability zones in clustered systems during recovery and maintenance
US20070006015A1 (en) * 2005-06-29 2007-01-04 Rao Sudhir G Fault-tolerance and fault-containment models for zoning clustered application silos into continuous availability and high availability zones in clustered systems during recovery and maintenance
US7480816B1 (en) 2005-08-04 2009-01-20 Sun Microsystems, Inc. Failure chain detection and recovery in a group of cooperating systems
WO2007041288A2 (en) * 2005-09-30 2007-04-12 Lockheed Martin Corporation Application of virtual servers to high availability and disaster recovery solutions
WO2007041288A3 (en) * 2005-09-30 2009-05-07 Lockheed Corp Application of virtual servers to high availability and disaster recovery solutions
US7577868B2 (en) 2005-09-30 2009-08-18 Lockheed Martin Corporation No data loss IT disaster recovery over extended distances
US20070079171A1 (en) * 2005-09-30 2007-04-05 Mehrdad Aidun No data loss it disaster recovery over extended distances
US20070078861A1 (en) * 2005-09-30 2007-04-05 Mehrdad Aidun Disaster recover/continuity of business adaptive solution framework
US20090271658A1 (en) * 2005-09-30 2009-10-29 Lockheed Martin Corporation No data loss it disaster recovery over extended distances
GB2446737B (en) * 2005-09-30 2010-11-10 Lockheed Corp Application of virtual servers to high availability and disaster recovery solutions
US7934116B2 (en) 2005-09-30 2011-04-26 Lockheed Martin Corporation Disaster recover/continuity of business adaptive solution framework
US20070078982A1 (en) * 2005-09-30 2007-04-05 Mehrdad Aidun Application of virtual servers to high availability and disaster recovery soultions
AU2006297144B2 (en) * 2005-09-30 2011-11-24 Lockheed Martin Corporation Application of virtual servers to high availability and disaster recovery solutions
US7933987B2 (en) 2005-09-30 2011-04-26 Lockheed Martin Corporation Application of virtual servers to high availability and disaster recovery solutions
US20070115818A1 (en) * 2005-11-04 2007-05-24 Bose Patrick G Triggered notification
US8554980B2 (en) * 2005-11-04 2013-10-08 Cisco Technology, Inc. Triggered notification
US20070220028A1 (en) * 2006-03-15 2007-09-20 Masami Hikawa Method and system for managing load balancing in data-processing system
US9906418B2 (en) 2006-05-24 2018-02-27 Microsoft Technology Licensing, Llc Applications and services as a bundle
US9160735B2 (en) 2006-05-24 2015-10-13 Microsoft Technology Licensing, Llc System for and method of securing a network utilizing credentials
US8898319B2 (en) 2006-05-24 2014-11-25 Maxsp Corporation Applications and services as a bundle
US8811396B2 (en) 2006-05-24 2014-08-19 Maxsp Corporation System for and method of securing a network utilizing credentials
US10511495B2 (en) 2006-05-24 2019-12-17 Microsoft Technology Licensing, Llc Applications and services as a bundle
US9893961B2 (en) 2006-05-24 2018-02-13 Microsoft Technology Licensing, Llc Applications and services as a bundle
US9584480B2 (en) 2006-05-24 2017-02-28 Microsoft Technology Licensing, Llc System for and method of securing a network utilizing credentials
US7814364B2 (en) * 2006-08-31 2010-10-12 Dell Products, Lp On-demand provisioning of computer resources in physical/virtual cluster environments
US20080126834A1 (en) * 2006-08-31 2008-05-29 Dell Products, Lp On-demand provisioning of computer resources in physical/virtual cluster environments
US20080071793A1 (en) * 2006-09-19 2008-03-20 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Using network access port linkages for data structure update decisions
US9680699B2 (en) 2006-09-19 2017-06-13 Invention Science Fund I, Llc Evaluation systems and methods for coordinating software agents
US9317506B2 (en) 2006-09-22 2016-04-19 Microsoft Technology Licensing, Llc Accelerated data transfer using common prior data segments
US9645900B2 (en) 2006-12-21 2017-05-09 Microsoft Technology Licensing, Llc Warm standby appliance
US8745171B1 (en) 2006-12-21 2014-06-03 Maxsp Corporation Warm standby appliance
US20130198559A1 (en) * 2006-12-21 2013-08-01 Maxsp Corporation Virtual recovery server
US20080168193A1 (en) * 2007-01-10 2008-07-10 International Business Machines Corporation Use of unique identifiers for each data format supported by a multi-format data store
US7975024B2 (en) * 2007-02-14 2011-07-05 Yakov Nudler Virtual personal computer access over multiple network sites
US20080195760A1 (en) * 2007-02-14 2008-08-14 Yakov Nudler Virtual Personal Computer Access Over Multiple Network Sites
US20080243866A1 (en) * 2007-03-29 2008-10-02 Manu Pandey System and method for improving cluster performance
US8112593B2 (en) * 2007-03-29 2012-02-07 Netapp, Inc. System and method for improving cluster performance
US8726072B1 (en) 2007-03-29 2014-05-13 Netapp, Inc. System and method for improving cluster performance using an operation thread for passive nodes
US7877553B2 (en) 2007-08-06 2011-01-25 Microsoft Corporation Sharing volume data via shadow copies using differential areas
US20090089406A1 (en) * 2007-09-30 2009-04-02 Sun Microsystems, Inc. Virtual cluster based upon operating system virtualization
EP2206040B1 (en) * 2007-09-30 2018-02-14 Oracle America, Inc. Virtual cluster based upon operating system virtualization
US8200738B2 (en) * 2007-09-30 2012-06-12 Oracle America, Inc. Virtual cluster based upon operating system virtualization
US9448858B2 (en) 2007-10-26 2016-09-20 Microsoft Technology Licensing, Llc Environment manager
US8977887B2 (en) 2007-10-26 2015-03-10 Maxsp Corporation Disaster recovery appliance
US8761546B2 (en) 2007-10-26 2014-06-24 Maxsp Corporation Method of and system for enhanced data storage
US9092374B2 (en) 2007-10-26 2015-07-28 Maxsp Corporation Method of and system for enhanced data storage
US8006079B2 (en) * 2008-02-22 2011-08-23 Netapp, Inc. System and method for fast restart of a guest operating system in a virtual machine environment
US20090217021A1 (en) * 2008-02-22 2009-08-27 Garth Richard Goodson System and method for fast restart of a guest operating system in a virtual machine environment
US7958184B2 (en) * 2008-03-04 2011-06-07 International Business Machines Corporation Network virtualization in a multi-node system with multiple networks
US20090228541A1 (en) * 2008-03-04 2009-09-10 Barsness Eric L Network virtualization in a multi-node system with multiple networks
US8117317B2 (en) * 2008-12-31 2012-02-14 Sap Ag Systems and methods for integrating local systems with cloud computing resources
US8316139B2 (en) * 2008-12-31 2012-11-20 Sap Ag Systems and methods for integrating local systems with cloud computing resources
US8190740B2 (en) * 2008-12-31 2012-05-29 Sap Ag Systems and methods for dynamically provisioning cloud computing resources
US7996525B2 (en) * 2008-12-31 2011-08-09 Sap Ag Systems and methods for dynamically provisioning cloud computing resources
US20100169497A1 (en) * 2008-12-31 2010-07-01 Sap Ag Systems and methods for integrating local systems with cloud computing resources
US20100169477A1 (en) * 2008-12-31 2010-07-01 Sap Ag Systems and methods for dynamically provisioning cloud computing resources
US20120124129A1 (en) * 2008-12-31 2012-05-17 Sap Ag Systems and Methods for Integrating Local Systems with Cloud Computing Resources
US20130204995A1 (en) * 2010-06-18 2013-08-08 Nokia Siemens Networks Oy Server cluster
EP2583417A4 (en) * 2010-06-18 2016-03-02 Nokia Solutions & Networks Oy Server cluster
US8631406B2 (en) * 2010-06-30 2014-01-14 Sap Ag Distributed cloud computing architecture
US20120005670A1 (en) * 2010-06-30 2012-01-05 Sap Ag Distributed cloud computing architecture
US8392378B2 (en) * 2010-12-09 2013-03-05 International Business Machines Corporation Efficient backup and restore of virtual input/output server (VIOS) cluster
US20120150815A1 (en) * 2010-12-09 2012-06-14 Ibm Corporation Efficient backup and restore of virtual input/output server (vios) cluster
US8819190B2 (en) * 2011-03-24 2014-08-26 International Business Machines Corporation Management of file images in a virtual environment
US20120246642A1 (en) * 2011-03-24 2012-09-27 Ibm Corporation Management of File Images in a Virtual Environment
US9154367B1 (en) * 2011-12-27 2015-10-06 Google Inc. Load balancing and content preservation
US9317381B2 (en) * 2012-11-20 2016-04-19 Hitachi, Ltd. Storage system and data management method
US20150242289A1 (en) * 2012-11-20 2015-08-27 Hitachi, Ltd. Storage system and data management method
US10019459B1 (en) 2012-12-19 2018-07-10 Springpath, LLC Distributed deduplication in a distributed system of hybrid storage and compute nodes
US9965203B1 (en) * 2012-12-19 2018-05-08 Springpath, LLC Systems and methods for implementing an enterprise-class converged compute-network-storage appliance
US9720619B1 (en) * 2012-12-19 2017-08-01 Springpath, Inc. System and methods for efficient snapshots in a distributed system of hybrid storage and compute nodes
US9300530B2 (en) * 2013-02-19 2016-03-29 Nec Corporation Management device, management method, and medium
US20140237306A1 (en) * 2013-02-19 2014-08-21 Nec Corporation Management device, management method, and medium
US20140280956A1 (en) * 2013-03-14 2014-09-18 Vmware, Inc. Methods and systems to manage computer resources in elastic multi-tenant cloud computing systems
US9571567B2 (en) * 2013-03-14 2017-02-14 Vmware, Inc. Methods and systems to manage computer resources in elastic multi-tenant cloud computing systems
US20140380087A1 (en) * 2013-06-25 2014-12-25 International Business Machines Corporation Fault Tolerance Solution for Stateful Applications
US9110864B2 (en) * 2013-06-25 2015-08-18 International Business Machines Corporation Fault tolerance solution for stateful applications
WO2015190934A1 (en) * 2014-06-13 2015-12-17 Mhwirth As Method and system for controlling well operations
GB2542067B (en) * 2014-06-13 2019-01-09 Mhwirth As Method and system for controlling well operations
US10316623B2 (en) * 2014-06-13 2019-06-11 Mhwirth As Method and system for controlling well operations
GB2542067A (en) * 2014-06-13 2017-03-08 Mhwirth As Method and system for controlling well operations
US10938703B1 (en) 2016-12-27 2021-03-02 EMC IP Holding Company, LLC Status monitoring system and method
US10397087B1 (en) * 2016-12-27 2019-08-27 EMC IP Holding Company LLC Status monitoring system and method
US20200042394A1 (en) * 2018-07-31 2020-02-06 EMC IP Holding Company LLC Managing journaling resources with copies stored in multiple locations
US10824512B2 (en) * 2018-07-31 2020-11-03 EMC IP Holding Company LLC Managing journaling resources with copies stored in multiple locations
US11385981B1 (en) * 2018-12-28 2022-07-12 Virtuozzo International Gmbh System and method for deploying servers in a distributed storage to improve fault tolerance
US20220350716A1 (en) * 2018-12-28 2022-11-03 Virtuozzo International Gmbh System and method for booting servers in a distributed storage to improve fault tolerance
US11435916B2 (en) * 2019-06-26 2022-09-06 EMC IP Holding Company LLC Mapping of data storage system for a redundant array of independent nodes
US11416354B2 (en) * 2019-09-05 2022-08-16 EMC IP Holding Company LLC Techniques for providing intersite high availability of data nodes in a virtual cluster
US20230336621A1 (en) * 2022-04-15 2023-10-19 Avaya Management L.P. Call and media preserving failovers in a cloud environment
US11888928B2 (en) * 2022-04-15 2024-01-30 Avaya Management L.P. Call and media preserving failovers in a cloud environment

Also Published As

Publication number Publication date
US7287186B2 (en) 2007-10-23

Similar Documents

Publication Publication Date Title
US7287186B2 (en) Shared nothing virtual cluster
JP6514308B2 (en) Failover and Recovery for Replicated Data Instances
EP1410229B1 (en) HIGH-AVAILABILITY CLUSTER VIRTUAL SERVER SYSTEM and method
CN107111457B (en) Non-disruptive controller replacement in cross-cluster redundancy configuration
WO2019085875A1 (en) Configuration modification method for storage cluster, storage cluster and computer system
JP4307673B2 (en) Method and apparatus for configuring and managing a multi-cluster computer system
EP2883147B1 (en) Synchronous local and cross-site failover in clustered storage systems
US5129080A (en) Method and system increasing the operational availability of a system of computer programs operating in a distributed system of computers
US7788524B2 (en) Fault-tolerant networks
JP4751117B2 (en) Failover and data migration using data replication
US7680994B2 (en) Automatically managing the state of replicated data of a computing environment, and methods therefor
US9785691B2 (en) Method and apparatus for sequencing transactions globally in a distributed database cluster
JP4457184B2 (en) Failover processing in the storage system
US8375363B2 (en) Mechanism to change firmware in a high availability single processor system
US8001079B2 (en) System and method for system state replication
US8856091B2 (en) Method and apparatus for sequencing transactions globally in distributed database cluster
US7188237B2 (en) Reboot manager usable to change firmware in a high availability single processor system
US9817721B1 (en) High availability management techniques for cluster resources
JP2019219954A (en) Cluster storage system, data management control method, and data management control program
US8316110B1 (en) System and method for clustering standalone server applications and extending cluster functionality
JP2019536167A (en) Method and apparatus for dynamically managing access to logical unit numbers in a distributed storage area network environment
Chittigala Business Resiliency for Enterprise Blockchain Payment Systems
Davis et al. High-Availability Options

Legal Events

Date Code Title Description
AS Assignment

Owner name: SURGIENT, INC., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MCCRORY, DAVE D.;HIRSCHFELD, ROBERT A.;REEL/FRAME:015272/0717

Effective date: 20040609

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: SQUARE 1 BANK, NORTH CAROLINA

Free format text: SECURITY AGREEMENT;ASSIGNOR:SURGIENT, INC.;REEL/FRAME:020325/0985

Effective date: 20071130

Owner name: SQUARE 1 BANK, NORTH CAROLINA

Free format text: SECURITY AGREEMENT;ASSIGNOR:SURGIENT, INC.;REEL/FRAME:020325/0985

Effective date: 20071130

AS Assignment

Owner name: ESCALATE CAPITAL I, L.P., A DELAWARE LIMITED PARTNERSHIP

Free format text: INTELLECTUAL PROPERTY SECURITY AGREEMENT TO THAT CERTAIN LOAN AGREEMENT;ASSIGNOR:SURGIENT, INC., A DELAWARE CORPORATION;REEL/FRAME:021709/0971

Effective date: 20080730

FEPP Fee payment procedure

Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: QUEST SOFTWARE, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SURGIENT, INC.;REEL/FRAME:025381/0523

Effective date: 20101006

AS Assignment

Owner name: WELLS FARGO CAPITAL FINANCE, LLC, AS AGENT, CALIFORNIA

Free format text: AMENDMENT NUMBER SIX TO PATENT SECURITY AGREEMENT;ASSIGNORS:QUEST SOFTWARE, INC.;AELITA SOFTWARE CORPORATION;SCRIPTLOGIC CORPORATION;AND OTHERS;REEL/FRAME:025608/0173

Effective date: 20110107

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: AELITA SOFTWARE CORPORATION, CALIFORNIA

Free format text: RELEASE OF SECURITY INTEREST IN PATENT COLLATERAL;ASSIGNOR:WELLS FARGO CAPITAL FINANCE, LLC (FORMERLY KNOWN AS WELLS FARGO FOOTHILL, LLC);REEL/FRAME:029050/0679

Effective date: 20120927

Owner name: VIZIONCORE, INC., ILLINOIS

Free format text: RELEASE OF SECURITY INTEREST IN PATENT COLLATERAL;ASSIGNOR:WELLS FARGO CAPITAL FINANCE, LLC (FORMERLY KNOWN AS WELLS FARGO FOOTHILL, LLC);REEL/FRAME:029050/0679

Effective date: 20120927

Owner name: SCRIPTLOGIC CORPORATION, FLORIDA

Free format text: RELEASE OF SECURITY INTEREST IN PATENT COLLATERAL;ASSIGNOR:WELLS FARGO CAPITAL FINANCE, LLC (FORMERLY KNOWN AS WELLS FARGO FOOTHILL, LLC);REEL/FRAME:029050/0679

Effective date: 20120927

Owner name: QUEST SOFTWARE, INC., CALIFORNIA

Free format text: RELEASE OF SECURITY INTEREST IN PATENT COLLATERAL;ASSIGNOR:WELLS FARGO CAPITAL FINANCE, LLC (FORMERLY KNOWN AS WELLS FARGO FOOTHILL, LLC);REEL/FRAME:029050/0679

Effective date: 20120927

Owner name: NETPRO COMPUTING, INC., CALIFORNIA

Free format text: RELEASE OF SECURITY INTEREST IN PATENT COLLATERAL;ASSIGNOR:WELLS FARGO CAPITAL FINANCE, LLC (FORMERLY KNOWN AS WELLS FARGO FOOTHILL, LLC);REEL/FRAME:029050/0679

Effective date: 20120927

AS Assignment

Owner name: DELL SOFTWARE INC., CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:QUEST SOFTWARE, INC.;REEL/FRAME:031043/0281

Effective date: 20130701

AS Assignment

Owner name: SURGIENT, INC., TEXAS

Free format text: RELEASE BY SECURED PARTY, EFFECTIVE 08/09/2010;ASSIGNOR:ESCALATE CAPITAL I, L.P.;REEL/FRAME:031694/0705

Effective date: 20131120

Owner name: SURGIENT, INC., TEXAS

Free format text: RELEASE BY SECURED PARTY, EFFECTIVE 08/10/2010;ASSIGNOR:SQUARE 1 BANK;REEL/FRAME:031694/0688

Effective date: 20131113

AS Assignment

Owner name: BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT, TEXAS

Free format text: PATENT SECURITY AGREEMENT (ABL);ASSIGNORS:DELL INC.;APPASSURE SOFTWARE, INC.;ASAP SOFTWARE EXPRESS, INC.;AND OTHERS;REEL/FRAME:031898/0001

Effective date: 20131029

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: PATENT SECURITY AGREEMENT (TERM LOAN);ASSIGNORS:DELL INC.;APPASSURE SOFTWARE, INC.;ASAP SOFTWARE EXPRESS, INC.;AND OTHERS;REEL/FRAME:031899/0261

Effective date: 20131029

Owner name: BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS FIRST LIEN COLLATERAL AGENT, TEXAS

Free format text: PATENT SECURITY AGREEMENT (NOTES);ASSIGNORS:APPASSURE SOFTWARE, INC.;ASAP SOFTWARE EXPRESS, INC.;BOOMI, INC.;AND OTHERS;REEL/FRAME:031897/0348

Effective date: 20131029

Owner name: BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT, TEXAS

Free format text: PATENT SECURITY AGREEMENT (ABL);ASSIGNORS:DELL INC.;APPASSURE SOFTWARE, INC.;ASAP SOFTWARE EXPRESS, INC.;AND OTHERS;REEL/FRAME:031898/0001

Effective date: 20131029

Owner name: BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS FIRST LIEN COLLATERAL AGENT, TEXAS

Free format text: PATENT SECURITY AGREEMENT (NOTES);ASSIGNORS:APPASSURE SOFTWARE, INC.;ASAP SOFTWARE EXPRESS, INC.;BOOMI, INC.;AND OTHERS;REEL/FRAME:031897/0348

Effective date: 20131029

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: PATENT SECURITY AGREEMENT (TERM LOAN);ASSIGNORS:DELL INC.;APPASSURE SOFTWARE, INC.;ASAP SOFTWARE EXPRESS, INC.;AND OTHERS;REEL/FRAME:031899/0261

Effective date: 20131029

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: WYSE TECHNOLOGY L.L.C., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: COMPELLENT TECHNOLOGIES, INC., MINNESOTA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: FORCE10 NETWORKS, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: DELL INC., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: DELL MARKETING L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: ASAP SOFTWARE EXPRESS, INC., ILLINOIS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: PEROT SYSTEMS CORPORATION, TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: SECUREWORKS, INC., GEORGIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: DELL USA L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: CREDANT TECHNOLOGIES, INC., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: DELL SOFTWARE INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

Owner name: APPASSURE SOFTWARE, INC., VIRGINIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:040065/0216

Effective date: 20160907

AS Assignment

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT, TEXAS

Free format text: SECURITY AGREEMENT;ASSIGNORS:AVENTAIL LLC;DELL PRODUCTS L.P.;DELL SOFTWARE INC.;REEL/FRAME:040039/0642

Effective date: 20160907

Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: SECURITY AGREEMENT;ASSIGNORS:AVENTAIL LLC;DELL PRODUCTS, L.P.;DELL SOFTWARE INC.;REEL/FRAME:040030/0187

Effective date: 20160907

Owner name: FORCE10 NETWORKS, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: CREDANT TECHNOLOGIES, INC., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: APPASSURE SOFTWARE, INC., VIRGINIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: WYSE TECHNOLOGY L.L.C., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: DELL SOFTWARE INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: PEROT SYSTEMS CORPORATION, TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: ASAP SOFTWARE EXPRESS, INC., ILLINOIS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: COMPELLENT TECHNOLOGIES, INC., MINNESOTA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: DELL USA L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: DELL INC., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: DELL MARKETING L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: SECUREWORKS, INC., GEORGIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:040040/0001

Effective date: 20160907

Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: SECURITY AGREEMENT;ASSIGNORS:AVENTAIL LLC;DELL PRODUCTS, L.P.;DELL SOFTWARE INC.;REEL/FRAME:040030/0187

Effective date: 20160907

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT, TEXAS

Free format text: SECURITY AGREEMENT;ASSIGNORS:AVENTAIL LLC;DELL PRODUCTS L.P.;DELL SOFTWARE INC.;REEL/FRAME:040039/0642

Effective date: 20160907

Owner name: SECUREWORKS, INC., GEORGIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: DELL MARKETING L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: ASAP SOFTWARE EXPRESS, INC., ILLINOIS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: FORCE10 NETWORKS, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: APPASSURE SOFTWARE, INC., VIRGINIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: PEROT SYSTEMS CORPORATION, TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: COMPELLENT TECHNOLOGIES, INC., MINNESOTA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: DELL SOFTWARE INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: DELL USA L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: DELL INC., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: CREDANT TECHNOLOGIES, INC., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

Owner name: WYSE TECHNOLOGY L.L.C., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:040065/0618

Effective date: 20160907

AS Assignment

Owner name: AVENTAIL LLC, CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:040521/0467

Effective date: 20161031

Owner name: DELL PRODUCTS, L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:040521/0467

Effective date: 20161031

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN CERTAIN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040039/0642);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A.;REEL/FRAME:040521/0016

Effective date: 20161031

Owner name: DELL SOFTWARE INC., CALIFORNIA

Free format text: RELEASE OF SECURITY INTEREST IN CERTAIN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040039/0642);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A.;REEL/FRAME:040521/0016

Effective date: 20161031

Owner name: AVENTAIL LLC, CALIFORNIA

Free format text: RELEASE OF SECURITY INTEREST IN CERTAIN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040039/0642);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A.;REEL/FRAME:040521/0016

Effective date: 20161031

Owner name: DELL SOFTWARE INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:040521/0467

Effective date: 20161031

AS Assignment

Owner name: QUEST SOFTWARE INC., CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:DELL SOFTWARE INC.;REEL/FRAME:044800/0848

Effective date: 20161101

AS Assignment

Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT, NEW YORK

Free format text: SECOND LIEN PATENT SECURITY AGREEMENT;ASSIGNOR:QUEST SOFTWARE INC.;REEL/FRAME:046327/0486

Effective date: 20180518

Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT, NEW YORK

Free format text: FIRST LIEN PATENT SECURITY AGREEMENT;ASSIGNOR:QUEST SOFTWARE INC.;REEL/FRAME:046327/0347

Effective date: 20180518

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 12

AS Assignment

Owner name: QUEST SOFTWARE INC., CALIFORNIA

Free format text: RELEASE OF FIRST LIEN SECURITY INTEREST IN PATENTS;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT;REEL/FRAME:059105/0479

Effective date: 20220201

Owner name: QUEST SOFTWARE INC., CALIFORNIA

Free format text: RELEASE OF SECOND LIEN SECURITY INTEREST IN PATENTS;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT;REEL/FRAME:059096/0683

Effective date: 20220201

Owner name: GOLDMAN SACHS BANK USA, NEW YORK

Free format text: FIRST LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNORS:QUEST SOFTWARE INC.;ANALYTIX DATA SERVICES INC.;BINARYTREE.COM LLC;AND OTHERS;REEL/FRAME:058945/0778

Effective date: 20220201

Owner name: MORGAN STANLEY SENIOR FUNDING, INC., MARYLAND

Free format text: SECOND LIEN INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNORS:QUEST SOFTWARE INC.;ANALYTIX DATA SERVICES INC.;BINARYTREE.COM LLC;AND OTHERS;REEL/FRAME:058952/0279

Effective date: 20220201