US20060248371A1 - Method and apparatus for a common cluster model for configuring, managing, and operating different clustering technologies in a data center - Google Patents

Method and apparatus for a common cluster model for configuring, managing, and operating different clustering technologies in a data center

Info

Publication number
US20060248371A1
US20060248371A1 (application US 11/117,137)
Authority
US
United States
Prior art keywords
cluster
resource
domain
node
operations
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/117,137
Inventor
Ming Chen
Thomas Juergen Lumpp
Markus Mueller
Juergen Peter Schneider
Andrew Neil Trossman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US 11/117,137
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION (assignment of assignors interest). Assignors: LUMPP, THOMAS JUERGEN; SCHNEIDER, JUERGEN PETER; CHEN, MING; TROSSMAN, ANDREW NEIL; MUELLER, MARKUS
Publication of US20060248371A1
Priority to US 12/622,297 (US 8,843,561 B2)
Status: Abandoned

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 - Partitioning or combining of resources
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1029 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers using data related to the state of servers by a load balancer
    • G06F 2209/00 - Indexing scheme relating to G06F9/00
    • G06F 2209/50 - Indexing scheme relating to G06F9/50
    • G06F 2209/505 - Clust


Abstract

A method, apparatus, and computer instructions are provided for a common cluster model for configuring, managing, and operating different clustering technologies in a data center. The common cluster model supports peer cluster domains and management server cluster domains. Each cluster domain may have one or more cluster nodes. For each cluster domain, one or more cluster resources may be defined. These resources may depend on one another and may be grouped into a resource group. A set of cluster domain and cluster resource logical device operations is provided to configure, manage, and operate cluster domains and their associated resources.

Description

    BACKGROUND OF THE INVENTION
  • 1. Technical Field
  • The present invention relates to a data processing system. In particular, the present invention relates to modeling clustering technologies in a data center. Still more particularly, the present invention relates to providing a common cluster model for configuring, managing, and operating different clustering technologies in a data center.
  • 2. Description of Related Art
  • In a data center, a variety of clustering technologies may be used to configure, manage, and operate clusters. Examples of these clustering technologies include Veritas™ clustering server, which is available from Veritas Software Corporation, Tivoli® System Automation (TSA) for Multiplatforms and WebSphere, which is available from International Business Machines Corporation, and Microsoft® cluster server available from Microsoft Corporation.
  • These clustering technologies each have their own way to configure, manage, and operate clusters. For example, Veritas clustering server provides a configuration definition template for configuring a Veritas cluster. A TSA cluster uses a pre-camp policy and a CLI cluster command to configure the TSA cluster. Microsoft cluster server uses a configuration user interface to configure the cluster and encapsulates all detailed configuration steps from the users. The WebSphere cluster uses a deployment manager server to configure a WebSphere Application Server (WAS) cluster by calling a WAS internal application programming interface (API). With this variety of ways to configure, manage, and operate clusters, no mechanism is present that allows users to easily interact with different clustering technologies.
  • In addition, these clustering technologies each have their own data model for modeling the cluster. When encountering different clustering technologies, users have to have knowledge of different data models associated with each of these clustering technologies. For example, if both a WAS cluster and a TSA high availability cluster are used in a data center, a user must handle these two clustering technologies and their data models separately.
  • Therefore, it would be advantageous to have an improved method that operates different clustering technologies and understands different data models.
  • SUMMARY OF THE INVENTION
  • The present invention provides a method, an apparatus, and computer instructions for a common cluster model for configuring, managing, and operating different clustering technologies in a data center. The common cluster model defines a cluster domain that includes at least one cluster node for a clustering technology, associates at least one cluster resource with the cluster domain or the at least one cluster node, and associates at least one property with the cluster domain or the at least one cluster resource.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, further objectives and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:
  • FIG. 1 depicts a pictorial representation of a network of data processing systems in which an illustrative embodiment of the present invention may be implemented;
  • FIG. 2 is a block diagram of a data processing system that may be implemented as a server, in accordance with an illustrative embodiment of the present invention;
  • FIG. 3 is a block diagram of a data processing system in which an illustrative embodiment of the present invention may be implemented;
  • FIG. 4 is a diagram illustrating an exemplary data center, in accordance with an illustrative embodiment of the present invention;
  • FIG. 5 is a diagram illustrating a known method to configure, manage, and operate a cluster;
  • FIG. 6 is a diagram illustrating known relationships between different clustering technologies;
  • FIG. 7 is a diagram illustrating relationships between administrators and different clustering technologies in accordance with an illustrative embodiment of the present invention;
  • FIG. 8 is a diagram illustrating an exemplary common cluster domain model in accordance with an illustrative embodiment of the present invention;
  • FIG. 9 is a diagram illustrating exemplary cluster domain logical device operations and cluster resource logical device operations in accordance with an illustrative embodiment of the present invention; and
  • FIG. 10 is a diagram illustrating exemplary usage of common cluster model in FIG. 8 to model a high availability cluster in accordance with an illustrative embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
  • With reference now to the figures, FIG. 1 depicts a pictorial representation of a network of data processing systems in which an illustrative embodiment of the present invention may be implemented. Network data processing system 100 is a network of computers in which the present invention may be implemented. Network data processing system 100 contains a network 102, which is the medium used to provide communications links between various devices and computers connected together within network data processing system 100. Network 102 may include connections, such as wire, wireless communication links, or fiber optic cables.
  • In the depicted example, server 104 is connected to network 102 along with storage unit 106. In addition, clients 108, 110, and 112 are connected to network 102. These clients 108, 110, and 112 may be, for example, personal computers or network computers. In the depicted example, server 104 provides data, such as boot files, operating system images, and applications to clients 108-112. Clients 108, 110, and 112 are clients to server 104. Network data processing system 100 may include additional servers, clients, and other devices not shown. In the depicted example, network data processing system 100 is the Internet with network 102 representing a worldwide collection of networks and gateways that use the Transmission Control Protocol/Internet Protocol (TCP/IP) suite of protocols to communicate with one another. At the heart of the Internet is a backbone of high-speed data communication lines between major nodes or host computers, consisting of thousands of commercial, government, educational and other computer systems that route data and messages. Of course, network data processing system 100 also may be implemented as a number of different types of networks, such as for example, an intranet, a local area network (LAN), or a wide area network (WAN). FIG. 1 is intended as an example, and not as an architectural limitation for the present invention.
  • Referring to FIG. 2, a block diagram of a data processing system that may be implemented as a server, such as server 104 in FIG. 1, is depicted in accordance with a preferred embodiment of the present invention. Data processing system 200 may be a symmetric multiprocessor (SMP) system including a plurality of processors 202 and 204 connected to system bus 206. Alternatively, a single processor system may be employed. Also connected to system bus 206 is memory controller/cache 208, which provides an interface to local memory 209. I/O Bus Bridge 210 is connected to system bus 206 and provides an interface to I/O bus 212. Memory controller/cache 208 and I/O Bus Bridge 210 may be integrated as depicted.
  • Peripheral component interconnect (PCI) bus bridge 214 connected to I/O bus 212 provides an interface to PCI local bus 216. A number of modems may be connected to PCI local bus 216. Typical PCI bus implementations will support four PCI expansion slots or add-in connectors. Communications links to clients 108-112 in FIG. 1 may be provided through modem 218 and network adapter 220 connected to PCI local bus 216 through add-in connectors.
  • Additional PCI bus bridges 222 and 224 provide interfaces for additional PCI local buses 226 and 228, from which additional modems or network adapters may be supported. In this manner, data processing system 200 allows connections to multiple network computers. A memory-mapped graphics adapter 230 and hard disk 232 may also be connected to I/O bus 212 as depicted, either directly or indirectly.
  • Those of ordinary skill in the art will appreciate that the hardware depicted in FIG. 2 may vary. For example, other peripheral devices, such as optical disk drives and the like, also may be used in addition to or in place of the hardware depicted. The depicted example is not meant to imply architectural limitations with respect to the present invention.
  • The data processing system depicted in FIG. 2 may be, for example, an IBM eServer pSeries system, a product of International Business Machines Corporation in Armonk, N.Y., running the Advanced Interactive Executive (AIX) operating system or the LINUX operating system.
  • With reference now to FIG. 3, a block diagram illustrating a data processing system is depicted in which the present invention may be implemented. Data processing system 300 is an example of a client computer. Data processing system 300 employs a peripheral component interconnect (PCI) local bus architecture. Although the depicted example employs a PCI bus, other bus architectures such as Accelerated Graphics Port (AGP) and Industry Standard Architecture (ISA) may be used. Processor 302 and main memory 304 are connected to PCI local bus 306 through PCI Bridge 308. PCI Bridge 308 also may include an integrated memory controller and cache memory for processor 302. Additional connections to PCI local bus 306 may be made through direct component interconnection or through add-in boards. In the depicted example, local area network (LAN) adapter 310, small computer system interface (SCSI) host bus adapter 312, and expansion bus interface 314 are connected to PCI local bus 306 by direct component connection. In contrast, audio adapter 316, graphics adapter 318, and audio/video adapter 319 are connected to PCI local bus 306 by add-in boards inserted into expansion slots. Expansion bus interface 314 provides a connection for a keyboard and mouse adapter 320, modem 322, and additional memory 324. SCSI host bus adapter 312 provides a connection for hard disk drive 326, tape drive 328, and CD-ROM drive 330. Typical PCI local bus implementations will support three or four PCI expansion slots or add-in connectors.
  • An operating system runs on processor 302 and is used to coordinate and provide control of various components within data processing system 300 in FIG. 3. The operating system may be a commercially available operating system, such as Windows XP, which is available from Microsoft Corporation. An object oriented programming system such as Java may run in conjunction with the operating system and provide calls to the operating system from Java programs or applications executing on data processing system 300. “Java” is a trademark of Sun Microsystems, Inc. Instructions for the operating system, the object-oriented programming system, and applications or programs are located on storage devices, such as hard disk drive 326, and may be loaded into main memory 304 for execution by processor 302.
  • Those of ordinary skill in the art will appreciate that the hardware in FIG. 3 may vary depending on the implementation. Other internal hardware or peripheral devices, such as flash read-only memory (ROM), equivalent nonvolatile memory, or optical disk drives and the like, may be used in addition to or in place of the hardware depicted in FIG. 3. Also, the processes of the present invention may be applied to a multiprocessor data processing system.
  • As another example, data processing system 300 may be a stand-alone system configured to be bootable without relying on some type of network communication interface. As a further example, data processing system 300 may be a personal digital assistant (PDA) device, which is configured with ROM and/or flash ROM in order to provide non-volatile memory for storing operating system files and/or user-generated data.
  • The depicted example in FIG. 3 and above-described examples are not meant to imply architectural limitations. For example, data processing system 300 also may be a notebook computer or hand held computer in addition to taking the form of a PDA. Data processing system 300 also may be a kiosk or a Web appliance.
  • Turning now to FIG. 4, a diagram illustrating an exemplary data center is depicted, in accordance with a preferred embodiment of the present invention. As shown in FIG. 4, in this illustrative example, data center 400 includes resources, such as, customer 402, server 404, Virtual Local Area Network (VLAN) 406, subnet 408, router 410, switch 412, software products 416, load balancer 418, and data container 420.
  • Customer 402 may be, for example, a client or an administrator who uses a data processing system, such as data processing system 300 in FIG. 3. Server 404 may be implemented as a data processing system, such as data processing system 200 in FIG. 2. Server 404 may also be implemented as an application server, which hosts Web services, or other types of servers. Router 410 and switch 412 facilitate communications between different devices. VLAN 406 is a network of computers that behave as if they are connected to the same wire even though they may actually be physically located on different segments of a local area network. Subnet 408 is a portion of a network, which may be a physically independent network segment and shares a network address with other portions of the network.
  • Software products 416 are applications that may be deployed to a client or a server. Load balancer 418 spreads traffic among multiple systems such that no single system is overwhelmed. Load balancer 418 is normally implemented as software running on a data processing system. Data container 420 may be a database, such as DB2 Universal Database, a product available from International Business Machines Corporation.
  • Data center 400, as depicted in FIG. 4, is presented for purposes of illustrating the present invention. Other resources, such as, for example, a cluster of servers and a switch port, also may be included in data center 400. The mechanism of the present invention provides a common cluster model for configuring, managing, and operating clusters, such as clusters of servers 404, in a data center, such as data center 400. The processes of the present invention may be performed by a processing unit comprising one or more processors, such as processor 302 in FIG. 3, using computer implemented instructions, which may be located in a memory such as, for example, main memory 304, memory 324, or in one or more peripheral devices 326 and 330.
  • Turning now to FIG. 5, a diagram illustrating a known method to configure, manage, and operate a cluster is depicted. As shown in FIG. 5, currently, administrator 500 configures, manages, and operates different clustering technologies in different ways. For example, administrator 500 configures Veritas cluster 502 and its resources using Veritas configuration files and agents.
  • In another example, administrator 500 configures Microsoft cluster 504 and its resources using the MS cluster configuration user interface or Windows scripts. Similarly, administrator 500 uses the WebSphere API to configure WAS cluster 506. In yet another example, administrator 500 configures TSA HA cluster 508 and its pre-camp policy using TSA-specific commands, such as the CLI.
  • Turning now to FIG. 6, a diagram illustrating known relationships between different clustering technologies is depicted. As shown in FIG. 6, administrator 600 currently has to handle different clustering technologies 602 individually, since clustering technologies 602 are incompatible with one another. Therefore, a need exists for a common model that handles different clustering technologies.
  • In an illustrative embodiment, the present invention provides such a common model, known as a common cluster model, which configures, manages, and operates a cluster regardless of which clustering technology the cluster employs. The common cluster model allows a user to interact with a vendor specific clustering technology without knowledge of the specific clustering technology or its data model.
  • The common cluster model is a common interface between the user and a plurality of logical device operations. The logical device operations map the common cluster model into specific device driver workflows which interact with various vendor specific clustering technologies. Thus, the common cluster model provides a higher level of abstraction for cluster administrators to configure, manage, and operate the data center where different clustering technologies are applied.
  • Turning now to FIG. 7, a diagram illustrating relationships between administrators and different clustering technologies is depicted in accordance with an illustrative embodiment of the present invention. As shown in FIG. 7, administrator 700 differs from administrators 500 and 600 in FIGS. 5 and 6 in that administrator 700 interacts with common cluster model 702 instead of vendor specific clustering technologies, such as Veritas cluster 704, WAS cluster 706, Microsoft cluster 708, and TSA HA cluster 710.
  • With common cluster model 702 provided by the present invention, administrator 700 may invoke logical device operations 712 via graphical user interface 714. Logical device operations 712 allow administrator 700 to configure, manage, and operate vendor specific clustering technologies 704-710 using device driver workflows 716.
  • Examples of operations administrator 700 may perform via common cluster model 702 include configuring a cluster, adding a node to a cluster, removing a node from a cluster, starting or stopping a cluster, starting or stopping a cluster node, creating and deleting a cluster resource, creating and deleting a resource group, synchronizing cluster or resource states, starting or stopping a cluster resource, updating a resource, and creating or updating resource dependencies.
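  • The patent describes these operations in prose only; the following is a minimal sketch, in Python, of how a common operation name might be routed to a vendor specific device driver workflow. All identifiers (LogicalDeviceOperations, DeviceDriverWorkflow, and the example command string) are invented for illustration and do not come from the patent.

```python
# Hypothetical sketch: routing a common logical device operation to a
# vendor-specific device driver workflow. All names are illustrative only.

class DeviceDriverWorkflow:
    """Vendor-specific implementation of one logical device operation."""
    def __init__(self, vendor, operation, steps):
        self.vendor = vendor          # e.g. "TSA", "WAS", "Veritas", "MSCS"
        self.operation = operation    # e.g. "Cluster.AddNode"
        self.steps = steps            # callable that performs the vendor steps

    def run(self, **params):
        return self.steps(**params)


class LogicalDeviceOperations:
    """Common interface the administrator (or GUI) calls; it hides the vendor."""
    def __init__(self):
        self._workflows = {}          # (vendor, operation) -> workflow

    def register(self, workflow):
        self._workflows[(workflow.vendor, workflow.operation)] = workflow

    def invoke(self, vendor, operation, **params):
        workflow = self._workflows[(vendor, operation)]
        return workflow.run(**params)


# Usage: the administrator asks for "Cluster.AddNode"; the registry picks a
# TSA-flavored workflow without the administrator knowing any TSA commands.
ops = LogicalDeviceOperations()
ops.register(DeviceDriverWorkflow(
    "TSA", "Cluster.AddNode",
    steps=lambda domain, node: f"add node {node} to peer domain {domain}"))
print(ops.invoke("TSA", "Cluster.AddNode", domain="db2_ha", node="node1"))
```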
  • In an illustrative embodiment, common cluster model 702 models two main types of cluster domains: management server domain and peer domain. A cluster domain is a virtual collection of physical or logical elements that provide services to a client as a single unit.
  • A peer domain has no management node that manages all of the nodes in the cluster domain. Thus, each node in the domain is a peer, and the nodes can access one another. Peer domain nodes are identical, and there is no ranking between the nodes. Cluster operations may be run on any one of the nodes in the peer cluster domain. Examples of a peer domain include TSA, Veritas, and High Availability Cluster Multiprocessing for AIX 5L (HACMP), available from International Business Machines Corporation.
  • A management server cluster domain, on the other hand, has one or more management nodes that manage the peer nodes in the domain. Peer nodes in a management cluster domain cannot access each other. All cluster operations are run by the management nodes. Examples of a management server domain include a WebSphere cluster and an open source project known as the Extreme Cluster Administration Toolkit (xCat).
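  • As a sketch only, with invented names, the distinction between the two domain types can be captured by recording a domain type on the cluster domain and selecting which nodes a cluster operation may run on:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ClusterNode:
    name: str
    node_type: str = "peer"          # "peer" or "management"

@dataclass
class ClusterDomain:
    name: str
    domain_type: str                 # "peer" or "management_server"
    nodes: List[ClusterNode] = field(default_factory=list)

    def operation_targets(self):
        """Nodes on which a cluster operation may be executed."""
        if self.domain_type == "peer":
            # Peer domain: all nodes are equal, any node may run the operation.
            return self.nodes
        # Management server domain: only management nodes run operations.
        return [n for n in self.nodes if n.node_type == "management"]

tsa = ClusterDomain("tsa_ha", "peer",
                    [ClusterNode("node1"), ClusterNode("node2")])
was = ClusterDomain("was_cell", "management_server",
                    [ClusterNode("dmgr", "management"),
                     ClusterNode("app1"), ClusterNode("app2")])
print([n.name for n in tsa.operation_targets()])   # ['node1', 'node2']
print([n.name for n in was.operation_targets()])   # ['dmgr']
```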
  • Turning now to FIG. 8, a diagram illustrating an exemplary common cluster domain model is depicted in accordance with an illustrative embodiment of the present invention. As shown in FIG. 8, common cluster model 800 is able to handle both types of cluster domains. For example, cluster domain 802 includes logical elements known as cluster nodes, represented by Dcm_object 804. There may be one or more cluster node members in cluster domain 802. An application tier may specify the associated cluster domain 802 to be used. A device driver or individual workflows may be associated with cluster domain 802, and software models may be associated with cluster domain 802 for configuring the cluster.
  • Cluster nodes, represented by Dcm_object 804, are logical representations of servers, as in H/A clusters, or of software instances, as in a WAS cluster, since a single system may be running two software instances simultaneously. Cluster node type 803 may be either a peer node or, if the cluster domain is a management server domain, a management node. Common cluster model 800 provides the flexibility to support both types of clusters. For each cluster node, desired state 807 may be specified in order to determine what state cluster domain 802 should be in after it is configured, for example, online. In addition, observed state 805 may also be specified, which indicates the current state of cluster domain 802, for example, offline.
  • In addition, cluster nodes in cluster domain 802 may be located on different physical systems, for example, a WAS cluster, or on a single system, for example, two WAS server instances running on a single machine. Furthermore, cluster domain 802 may be nested 806, meaning that a cluster domain may include one or more sub-cluster domains. However, a sub-cluster domain may have exactly one parent cluster domain.
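  • A separate minimal sketch (again with invented identifiers, independent of the previous one) of the desired and observed states and of the nesting rule, under the assumption that the single-parent constraint is checked when a sub-cluster domain is attached:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative only: desired vs. observed state and domain nesting with a
# single parent, as described for cluster domain 802.

@dataclass
class ClusterDomain:
    name: str
    desired_state: str = "online"    # state the domain should reach (807)
    observed_state: str = "offline"  # state currently reported (805)
    parent: Optional["ClusterDomain"] = None
    children: List["ClusterDomain"] = field(default_factory=list)

    def add_subdomain(self, child: "ClusterDomain"):
        if child.parent is not None:
            raise ValueError("a sub-cluster domain has exactly one parent")
        child.parent = self
        self.children.append(child)

top = ClusterDomain("data_center_clusters")
top.add_subdomain(ClusterDomain("db_tier"))
top.add_subdomain(ClusterDomain("web_tier"))
print([c.name for c in top.children])   # ['db_tier', 'web_tier']
```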
  • In order to configure, manage, and operate cluster domain 802, a set of cluster service operations 808 may be used. For example, config operation 810 may be used to look up the nodes belonging to a cluster domain and its existing resources and then define those resources. For each cluster domain 802, domain level cluster resources 812 and cluster domain properties 814 may be defined. These illustrative cluster service operations are also referred to as cluster domain operations.
  • In these illustrative examples, cluster resource 812 is a specialized software resource 816. Cluster resource 812 may be either a domain level cluster resource or a node level cluster resource. Cluster resource 812 includes cluster resource type 818, a desired state, an observed state, and a display name. Cluster resource type 818 may be an application, an IP, a shared disk, etc. An example of an application is ipconfig.
  • Start/stop dependencies, or relationships 820, may also be defined among cluster resources 812. For example, a dependency may be associated with resource A and resource B, defining that resource A starts before resource B. Examples of resource dependencies include storage mappings and database instances. Each cluster resource 812 may include one or more resource attributes, represented by Dcm_properties 814. Examples of resource attributes include IP addresses, name masks, etc.
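  • The following sketch, with hypothetical names, illustrates a cluster resource carrying a resource type, desired and observed states, free-form attributes standing in for Dcm_properties 814, and a start-before dependency between two resources:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Illustrative names only: a cluster resource with its type, states and
# attributes; dependencies record which resource must start first.

@dataclass
class ClusterResource:
    display_name: str
    resource_type: str                      # e.g. "application", "IP", "shared_disk"
    desired_state: str = "online"
    observed_state: str = "offline"
    properties: Dict[str, str] = field(default_factory=dict)  # Dcm_properties stand-in

# (before, after): 'before' must be started before 'after' may start.
dependencies: List[Tuple[ClusterResource, ClusterResource]] = []

mount = ClusterResource("db2_mount", "shared_disk")
db2 = ClusterResource("db2_instance", "application",
                      properties={"instance": "db2inst1"})
dependencies.append((mount, db2))   # mount point starts before the DB/2 instance
print([(b.display_name, a.display_name) for b, a in dependencies])
```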
  • In addition, cluster resource 812 may be a member of resource group 822 or may be aggregated to form failover cluster resource 824, which consists of two or more redundant instances of cluster resource 812, for a high availability cluster or a peer domain cluster. Just like cluster domain 802, resource group 822 may also be associated with a device driver or individual workflows. A set of cluster resource operations 826 may be used to configure, manage, or operate cluster resource 812. For example, create dependency operation 828 may be used to create relationship 820 between two cluster resources.
  • Resource group 822 may be nested 830, meaning that a resource group may contain another resource group. However, each subgroup may have exactly one parent group. Each resource group 822 may define group attributes and group dependencies among different cluster resource groups. These group attributes may be referred to as resource group properties. Each resource group 822 includes one or more cluster resources 812. These resources may be located on the same system or on different systems. In addition, resource group 822 may also be associated with a device driver and individual workflows. Furthermore, a set of cluster resource group logical operations can be used to operate resource group 822.
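  • A small sketch of resource group nesting, assuming invented names, group attributes held as a simple dictionary, and a single-parent rule enforced when a subgroup is attached:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

# Sketch only: a resource group holds cluster resources (by name here) and
# optionally sub-groups; each sub-group has exactly one parent group.

@dataclass
class ResourceGroup:
    name: str
    attributes: Dict[str, str] = field(default_factory=dict)  # group properties
    members: List[str] = field(default_factory=list)          # resource names
    parent: Optional["ResourceGroup"] = None
    subgroups: List["ResourceGroup"] = field(default_factory=list)

    def add_subgroup(self, group: "ResourceGroup"):
        if group.parent is not None:
            raise ValueError("a subgroup has exactly one parent group")
        group.parent = self
        self.subgroups.append(group)

app_group = ResourceGroup("application")
db_group = ResourceGroup("db2", members=["db2_instance", "db2_mount", "db2_ip"])
app_group.add_subgroup(db_group)
print(db_group.parent.name)   # application
```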
  • In order to map common cluster model 800 onto a vendor specific data model, logical device operations, such as logical device operations 712 in FIG. 7, are used. As described above, common cluster model 800 provides cluster service operations 808 to map between common cluster domain 802 and the vendor specific domain model. For each vendor specific clustering technology, a workflow implementation is provided and associated with logical device operations 712 in FIG. 7, and a mapping between common cluster model 800 and the vendor specific domain model is generated automatically.
  • For example, when config operation 810 in cluster service operations 808 is invoked, logical device operations 712 in FIG. 7 are invoked to parse a resource template to obtain vendor specific attributes for a cluster resource, and to populate dcm_properties 814 for cluster resource 812 in common cluster model 800. Thus, administrators may use cluster service operations 808 to configure, manage, and operate cluster domains and their associated resources.
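One way to picture this mapping is a thin dispatcher that routes each common-model operation to the workflow registered for a given vendor's clustering technology. The registry, workflow names, and template handling in the sketch below are hypothetical placeholders layered on the classes introduced earlier, not code from the patent.

```python
# Hypothetical dispatch from a common cluster service operation ("Config")
# to a vendor-specific workflow that fills in a resource's dcm properties.
from typing import Callable, Dict, Tuple

WorkflowFn = Callable[[ClusterDomain], None]
workflow_registry: Dict[Tuple[str, str], WorkflowFn] = {}


def register_workflow(technology: str, operation: str, fn: WorkflowFn) -> None:
    """Associate a vendor-specific workflow with a logical device operation."""
    workflow_registry[(technology, operation)] = fn


def config(domain: ClusterDomain, technology: str) -> None:
    """Common 'Config' operation: delegate to the registered vendor workflow."""
    workflow_registry[(technology, "Config")](domain)


def tsa_config_workflow(domain: ClusterDomain) -> None:
    # Illustration only: pretend to parse a resource template and populate
    # each resource's dcm properties with vendor-specific attributes.
    for node in domain.nodes:
        for resource in node.resources:
            resource.properties.setdefault("managed-by", "TSA")


register_workflow("TSA", "Config", tsa_config_workflow)
config(ClusterDomain("example domain"), "TSA")
```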
  • Turning now to FIG. 9, a diagram illustrating exemplary cluster domain logical device operations and cluster resource logical device operations is depicted in accordance with an illustrative embodiment of the present invention. As shown in FIG. 9, cluster domain logical device operations 900 are provided by the present invention to configure, manage, and operate cluster domains. Cluster domain logical operations 900 include Config 902 for configuring a cluster domain, AddNode 904 that adds a cluster node to a cluster domain, Start 906 to start a cluster domain, Stop 908 to stop a cluster domain, StartNode 910 to start a node in a cluster domain, StopNode 912 to stop a node in a cluster domain, Remove 914 to remove a cluster domain, RemoveNode 916 to remove a node in a cluster domain, CreateResource 918 to create a cluster resource in a cluster domain, CreateResourceGroup 920 to create a resource group in a cluster domain, UpdateDomainStatus 922 to update a cluster domain observed state, and UpdateNodeStatus 924 to update a cluster node's observed state. These illustrative cluster domain logical operations may also be referred to as cluster domain operations.
  • As also shown in FIG. 9, cluster resource logical device operations 930 are provided by the present invention to configure, manage, and operate cluster resources within a cluster domain. Cluster resource logical device operations 930 include Start 932 to start a cluster resource, Stop 934 to stop a cluster resource, AddGroupMember 936 to add resources to a resource group, CreateDependency 938 to create dependencies between resources, Update 940 to update a cluster resource's attributes, and RemoveResource 942 to remove a cluster resource. These illustrative cluster resource logical operations may also be referred to as cluster resource operations.
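The two operation sets can be read as an abstract interface that each vendor-specific driver implements. The sketch below names its methods after the operations listed in FIG. 9 but is otherwise an illustrative assumption, reusing the hypothetical classes from the earlier sketches.

```python
# Hypothetical abstract interface mirroring the operations in FIG. 9; each
# vendor-specific clustering technology would provide a concrete driver.
from abc import ABC, abstractmethod
from typing import Dict


class ClusterDomainOps(ABC):
    @abstractmethod
    def config(self, domain: ClusterDomain) -> None: ...
    @abstractmethod
    def add_node(self, domain: ClusterDomain, node: ClusterNode) -> None: ...
    @abstractmethod
    def start(self, domain: ClusterDomain) -> None: ...
    @abstractmethod
    def stop(self, domain: ClusterDomain) -> None: ...
    @abstractmethod
    def start_node(self, domain: ClusterDomain, node: ClusterNode) -> None: ...
    @abstractmethod
    def stop_node(self, domain: ClusterDomain, node: ClusterNode) -> None: ...
    @abstractmethod
    def remove(self, domain: ClusterDomain) -> None: ...
    @abstractmethod
    def remove_node(self, domain: ClusterDomain, node: ClusterNode) -> None: ...
    @abstractmethod
    def create_resource(self, domain: ClusterDomain, resource: ClusterResource) -> None: ...
    @abstractmethod
    def create_resource_group(self, domain: ClusterDomain, group: ResourceGroup) -> None: ...
    @abstractmethod
    def update_domain_status(self, domain: ClusterDomain) -> None: ...
    @abstractmethod
    def update_node_status(self, node: ClusterNode) -> None: ...


class ClusterResourceOps(ABC):
    @abstractmethod
    def start(self, resource: ClusterResource) -> None: ...
    @abstractmethod
    def stop(self, resource: ClusterResource) -> None: ...
    @abstractmethod
    def add_group_member(self, group: ResourceGroup, resource: ClusterResource) -> None: ...
    @abstractmethod
    def create_dependency(self, first: ClusterResource, second: ClusterResource) -> None: ...
    @abstractmethod
    def update(self, resource: ClusterResource, attributes: Dict[str, str]) -> None: ...
    @abstractmethod
    def remove_resource(self, resource: ClusterResource) -> None: ...
```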
  • Turning now to FIG. 10, a diagram illustrating exemplary usage of common cluster model 800 in FIG. 8 to model a high availability cluster is depicted in accordance with an illustrative embodiment of the present invention. As shown in FIG. 10, TSA DB/2 H/A cluster domain 1002 may be modeled as cluster domain 802 in FIG. 8.
  • TSA DB/2 H/A cluster domain 1002 includes two cluster nodes: DB/2 Node 1 1004 and DB/2 Node 2 1006, which may be modeled using Dcm_object 804 in FIG. 8. In this example, DB/2 Node 1 1004 includes a number of resources accessible by a user, for example, nts server resources 1008, nts server IP 1010, DB/2 instance 1012, DB/2 Instance Mount Point 1014, and DB/2 Instance IP 1016. These resources are currently online.
  • A dependency may be defined between two resources. For example, DB/2 instance 1012 may depend on DB/2 Instance Mount Point 1014. Thus, DB/2 instance 1012 may not start until DB/2 Instance Mount Point 1014 has started. This dependency may be modeled by using relationship 820 in FIG. 8.
  • Similar to DB/2 Node 1 1004, DB/2 Node 2 1006 also includes a number of resources, including DB/2 Instance IP 1018, DB/2 Instance Mount Point 1020, DB/2 Instance 1022, nts server IP 1024, and nts server resources 1026. However, unlike the resources in DB/2 Node 1 1004, these resources are offline or on standby. With cluster domain logical device operations 900 and cluster resource logical device operations 930 in FIG. 9, an administrator may easily change the configuration of DB/2 Node 1 1004 and DB/2 Node 2 1006 to place these resources online.
  • Furthermore, resources from DB/2 Node 1 1004 and DB/2 Node 2 1006 may be grouped into a number of resource groups and placed online. For example, nts server resources 1008 and nts server IP 1010 from DB/2 Node 1 1004 and nts server IP 1024 and nts server resources 1026 from DB/2 Node 2 1006 may be grouped into nts resource group 1030, while DB/2 instance 1012, DB/2 Instance Mount Point 1014, and DB/2 Instance IP 1016 from DB/2 Node 1 1004 and DB/2 Instance IP 1018, DB/2 Instance Mount Point 1020, and DB/2 Instance 1022 from DB/2 Node 2 1006 may be grouped into DB/2 resource group 1032. Nts resource group 1030 and DB/2 resource group 1032 may then be placed online using cluster resource logical device operations.
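Using the hypothetical classes and helpers sketched earlier, the FIG. 10 arrangement could be assembled roughly as follows. Node, resource, and group names follow the figure; everything else (state strings, the node_resources helper, the treatment of "placing online" as setting desired states) is an illustrative assumption.

```python
# Illustrative assembly of the TSA DB/2 high availability example from FIG. 10.
from typing import Dict

domain = ClusterDomain("TSA DB2 HA cluster domain")
node1 = ClusterNode("DB2 Node 1", observed_state="online")
node2 = ClusterNode("DB2 Node 2", observed_state="offline")
domain.nodes = [node1, node2]


def node_resources(state: str) -> Dict[str, ClusterResource]:
    """Create one node's resource set with the given observed state."""
    return {
        "nts_res": ClusterResource("nts server resources", "application", observed_state=state),
        "nts_ip": ClusterResource("nts server IP", "IP", observed_state=state),
        "db2": ClusterResource("DB2 instance", "application", observed_state=state),
        "mount": ClusterResource("DB2 instance mount point", "shared disk", observed_state=state),
        "db2_ip": ClusterResource("DB2 instance IP", "IP", observed_state=state),
    }


res1 = node_resources("online")    # DB2 Node 1: resources currently online
res2 = node_resources("offline")   # DB2 Node 2: standby copies
node1.resources = list(res1.values())
node2.resources = list(res2.values())

# The DB2 instance may not start until its mount point has started.
create_dependency(res1["mount"], res1["db2"])
create_dependency(res2["mount"], res2["db2"])

# Group resources across both nodes, mirroring the nts and DB2 resource groups.
nts_group = ResourceGroup("nts resource group",
                          members=[res1["nts_res"], res1["nts_ip"],
                                   res2["nts_ip"], res2["nts_res"]])
db2_group = ResourceGroup("DB2 resource group",
                          members=[res1["db2"], res1["mount"], res1["db2_ip"],
                                   res2["db2_ip"], res2["mount"], res2["db2"]])

# Placing a group online amounts to setting the desired state of its members;
# a vendor-specific workflow would then drive the observed state to match.
for group in (nts_group, db2_group):
    for resource in group.members:
        resource.desired_state = "online"
```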
  • In summary, a common cluster model is provided by the present invention that understands vendor specific clustering technologies and their data models. Administrators may use the common cluster model and its logical device operations to configure, manage, and operate clusters, cluster nodes, and their resources without knowledge of vendor specific clustering technologies. Administrators are no longer required to learn each vendor specific data model and its operations in order to configure vendor specific clustering technologies. As a result, administrators may save significant time and effort in configuring, managing, and operating clusters.
  • It is important to note that while the present invention has been described in the context of a fully functioning data processing system, those of ordinary skill in the art will appreciate that the processes of the present invention are capable of being distributed in the form of a computer usable medium of instructions in a variety of forms and that embodiments of the present invention apply equally regardless of the particular type of signal bearing media actually used to carry out the distribution. Examples of computer usable media include recordable-type media, such as a floppy disk, a hard disk drive, a RAM, and CD-ROMs, and transmission-type media, such as digital and analog communications links.
  • The description of the embodiments of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.

Claims (20)

1. A method in a data processing system for a common cluster model for configuring, managing, and operating different clustering technologies in a data center, the method comprising:
defining a cluster domain for a clustering technology, wherein the cluster domain includes at least one cluster node;
associating at least one cluster resource with one of the cluster domain and the at least one cluster node; and
associating at least one property with one of the cluster domain and the at least one cluster resource.
2. The method of claim 1, further comprising:
defining a set of cluster domain operations, wherein the set of cluster domain operations operates on the cluster domain and the at least one cluster node; and
defining a set of cluster resource operations, wherein the set of cluster resource operations operates on the at least one cluster resource and the at least one property.
3. The method of claim 2, further comprising:
associating one or more of the at least one cluster resource with a resource group; and
defining a set of resource group properties for the resource group, wherein the set of cluster resource operations operates on the resource group and the set of resource group properties.
4. The method of claim 1, wherein the cluster domain is one of a peer cluster domain and a management server cluster domain, and wherein the at least one cluster node is one of a server and a software instance.
5. The method of claim 1, wherein the cluster domain includes another cluster domain.
6. The method of claim 1, wherein the cluster domain and the at least one cluster node each include an observed state and a desired state, wherein the observed state indicates a current state.
7. The method of claim 1, wherein the at least one cluster resource is a type of software resource, and wherein the at least one cluster resource includes a relationship to another cluster resource.
8. The method of claim 1, wherein the at least one cluster resource is aggregated to form a failover cluster resource for a high availability cluster, wherein the failover cluster resource includes redundancy of the at least one cluster resource.
9. The method of claim 3, wherein the resource group includes another resource group, and wherein the resource group depends on another resource group.
10. The method of claim 1, wherein the at least one cluster resource is located in one of a same data processing system and different data processing systems.
11. The method of claim 3, further comprising:
associating individual workflows and a device driver with one of the cluster domain, the at least one cluster resource, and the resource group, wherein the individual workflows and the device driver interact with the different clustering technologies.
12. The method of claim 2, wherein the set of cluster domain operations includes a config operation, wherein the config operation looks up the at least one cluster node associated with the cluster domain and the at least one cluster resource associated with the at least one cluster node.
13. The method of claim 2, wherein the set of cluster domain operations includes a start operation, a stop operation, an add node operation, a remove node operation, a start node operation, a stop node operation, a create resource operation, a create resource group operation, an update domain status operation, and an update node status operation.
14. The method of claim 2, wherein the set of cluster resource operations includes a start operation, a stop operation, an add group member operation, a create dependency operation, an update operation, and a remove resource operation.
15. A data processing system comprising:
a bus;
a memory connected to the bus, wherein a set of instructions is located in the memory; and
a processing unit connected to the bus, wherein the processing unit executes the set of instructions to define a cluster domain for a clustering technology, wherein the cluster domain includes at least one cluster node; associate at least one cluster resource with one of the cluster domain and the at least one cluster node; and associate at least one property with one of the cluster domain and the at least one cluster resource.
16. The data processing system of claim 15, wherein the processing unit further executes the set of instructions to define a set of cluster domain operations, wherein the set of cluster domain operations operates on the cluster domain and the at least one cluster node; and define a set of cluster resource operations, wherein the set of cluster resource operations operates on the at least one cluster resource and the at least one property.
17. The data processing system of claim 16, wherein the processing unit further executes the set of instructions to associate one or more of the at least one cluster resource with a resource group; and define a set of resource group properties for the resource group, wherein the set of cluster resource operations operates on the resource group and the set of resource group properties.
18. A computer program product comprising computer executable instructions embodied in a computer usable medium for configuring, managing, and operating different clustering technologies in a data center, the computer program product comprising:
first instructions for defining a cluster domain for a clustering technology, wherein the cluster domain includes at least one cluster node;
second instructions for associating at least one cluster resource with one of the cluster domain and the at least one cluster node; and
third instructions for associating at least one property with one of the cluster domain and the at least one cluster resource.
19. The computer program product of claim 18, further comprising:
fourth instructions for defining a set of cluster domain operations, wherein the set of cluster domain operations operates on the cluster domain and the at least one cluster node; and
fifth instructions for defining a set of cluster resource operations, wherein the set of cluster resource operations operates on the at least one cluster resource and the at least one property.
20. The computer program product of claim 19, further comprising:
sixth instructions for associating one or more of the at least one cluster resource with a resource group; and
seventh instructions for defining a set of resource group properties for the resource group, wherein the set of cluster resource operations operates on the resource group and the set of resource group properties.
US11/117,137 2005-04-28 2005-04-28 Method and apparatus for a common cluster model for configuring, managing, and operating different clustering technologies in a data center Abandoned US20060248371A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US11/117,137 US20060248371A1 (en) 2005-04-28 2005-04-28 Method and apparatus for a common cluster model for configuring, managing, and operating different clustering technologies in a data center
US12/622,297 US8843561B2 (en) 2005-04-28 2009-11-19 Common cluster model for configuring, managing, and operating different clustering technologies in a data center

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/117,137 US20060248371A1 (en) 2005-04-28 2005-04-28 Method and apparatus for a common cluster model for configuring, managing, and operating different clustering technologies in a data center

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/622,297 Continuation US8843561B2 (en) 2005-04-28 2009-11-19 Common cluster model for configuring, managing, and operating different clustering technologies in a data center

Publications (1)

Publication Number Publication Date
US20060248371A1 (en) 2006-11-02

Family

ID=37235836

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/117,137 Abandoned US20060248371A1 (en) 2005-04-28 2005-04-28 Method and apparatus for a common cluster model for configuring, managing, and operating different clustering technologies in a data center
US12/622,297 Expired - Fee Related US8843561B2 (en) 2005-04-28 2009-11-19 Common cluster model for configuring, managing, and operating different clustering technologies in a data center

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/622,297 Expired - Fee Related US8843561B2 (en) 2005-04-28 2009-11-19 Common cluster model for configuring, managing, and operating different clustering technologies in a data center

Country Status (1)

Country Link
US (2) US20060248371A1 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9069619B2 (en) * 2010-01-15 2015-06-30 Oracle International Corporation Self-testable HA framework library infrastructure
US8949425B2 (en) 2010-01-15 2015-02-03 Oracle International Corporation “Local resource” type as a way to automate management of infrastructure resources in oracle clusterware
US8583798B2 (en) * 2010-01-15 2013-11-12 Oracle International Corporation Unidirectional resource and type dependencies in oracle clusterware
US9207987B2 (en) * 2010-01-15 2015-12-08 Oracle International Corporation Dispersion dependency in oracle clusterware
US20110179173A1 (en) * 2010-01-15 2011-07-21 Carol Colrain Conditional dependency in a computing cluster
US8438573B2 (en) * 2010-01-15 2013-05-07 Oracle International Corporation Dependency on a resource type
US9098334B2 (en) * 2010-01-15 2015-08-04 Oracle International Corporation Special values in oracle clusterware resource profiles
US9497224B2 (en) 2011-08-09 2016-11-15 CloudPassage, Inc. Systems and methods for implementing computer security
US9124640B2 (en) 2011-08-09 2015-09-01 CloudPassage, Inc. Systems and methods for implementing computer security
US8412945B2 (en) 2011-08-09 2013-04-02 CloudPassage, Inc. Systems and methods for implementing security in a cloud computing environment
FR2995106B1 (en) * 2012-09-03 2015-08-14 Bull Sas METHOD AND DEVICE FOR PROCESSING CONTROLS IN A SET OF COMPUTER ELEMENTS
US9882919B2 (en) 2013-04-10 2018-01-30 Illumio, Inc. Distributed network security using a logical multi-dimensional label-based policy model
WO2014169062A1 (en) 2013-04-10 2014-10-16 Illumio, Inc. Distributed network management system using a logical multi-dimensional label-based policy model
US9563261B2 (en) 2014-11-25 2017-02-07 International Business Machines Corporation Management of power consumption in large computing clusters
US9917740B2 (en) * 2015-09-09 2018-03-13 International Business Machines Corporation Reducing internodal communications in a clustered system
JP6950437B2 (en) * 2017-10-10 2021-10-13 富士通株式会社 Information processing system, information processing device and program
US11416563B1 (en) * 2017-10-20 2022-08-16 Amazon Technologies, Inc. Query language for selecting and addressing resources

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6438705B1 (en) * 1999-01-29 2002-08-20 International Business Machines Corporation Method and apparatus for building and managing multi-clustered computer systems
US7529822B2 (en) * 2002-05-31 2009-05-05 Symantec Operating Corporation Business continuation policy for server consolidation environment
KR100553920B1 (en) * 2003-02-13 2006-02-24 인터내셔널 비지네스 머신즈 코포레이션 Method for operating a computer cluster
US20050171752A1 (en) * 2004-01-29 2005-08-04 Patrizio Jonathan P. Failure-response simulator for computer clusters
US20060248371A1 (en) * 2005-04-28 2006-11-02 International Business Machines Corporation Method and apparatus for a common cluster model for configuring, managing, and operating different clustering technologies in a data center

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4731860A (en) * 1985-06-19 1988-03-15 International Business Machines Corporation Method for identifying three-dimensional objects using two-dimensional images
US4837831A (en) * 1986-10-15 1989-06-06 Dragon Systems, Inc. Method for creating and using multiple-word sound models in speech recognition
US6338112B1 (en) * 1997-02-21 2002-01-08 Novell, Inc. Resource management in a clustered computer system
US20010056461A1 (en) * 2000-05-02 2001-12-27 Sun Microsystems, Inc. Cluster configuration repository
US20020042693A1 (en) * 2000-05-02 2002-04-11 Sun Microsystems, Inc. Cluster membership monitor
US20030041138A1 (en) * 2000-05-02 2003-02-27 Sun Microsystems, Inc. Cluster membership monitor
US6854069B2 (en) * 2000-05-02 2005-02-08 Sun Microsystems Inc. Method and system for achieving high availability in a networked computer system
US20030214525A1 (en) * 2001-07-06 2003-11-20 Esfahany Kouros H. System and method for managing object based clusters
US6826568B2 (en) * 2001-12-20 2004-11-30 Microsoft Corporation Methods and system for model matching
US7139925B2 (en) * 2002-04-29 2006-11-21 Sun Microsystems, Inc. System and method for dynamic cluster adjustment to node failures in a distributed data system
US20080216082A1 (en) * 2004-01-30 2008-09-04 Tamar Eilam Hierarchical Resource Management for a Computing Utility

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100064009A1 (en) * 2005-04-28 2010-03-11 International Business Machines Corporation Method and Apparatus for a Common Cluster Model for Configuring, Managing, and Operating Different Clustering Technologies in a Data Center
US10230588B2 (en) * 2005-07-07 2019-03-12 Sciencelogic, Inc. Dynamically deployable self configuring distributed network management system using a trust domain specification to authorize execution of network collection software on hardware components
US20070078911A1 (en) * 2005-10-04 2007-04-05 Ken Lee Replicating data across the nodes in a cluster environment
US7693882B2 (en) * 2005-10-04 2010-04-06 Oracle International Corporation Replicating data across the nodes in a cluster environment
US20120079448A1 (en) * 2005-11-02 2012-03-29 Openlogic, Inc. Stack or Project Extensibility and Certification for Staking Tool
US7805734B2 (en) * 2005-12-30 2010-09-28 Augmentix Corporation Platform management of high-availability computer systems
US20080010315A1 (en) * 2005-12-30 2008-01-10 Augmentix Corporation Platform management of high-availability computer systems
US8626557B2 (en) * 2006-09-26 2014-01-07 International Business Machines Corporation System and method of providing snapshot to support approval of workflow changes
US20080077466A1 (en) * 2006-09-26 2008-03-27 Garrett Andrew J System and method of providing snapshot to support approval of workflow changes
US20090113051A1 (en) * 2007-10-30 2009-04-30 Modern Grids, Inc. Method and system for hosting multiple, customized computing clusters
US7822841B2 (en) 2007-10-30 2010-10-26 Modern Grids, Inc. Method and system for hosting multiple, customized computing clusters
US20110023104A1 (en) * 2007-10-30 2011-01-27 Modern Grids, Inc. System for hosting customized computing clusters
US8352584B2 (en) 2007-10-30 2013-01-08 Light Refracture Ltd., Llc System for hosting customized computing clusters
US20090307246A1 (en) * 2008-06-04 2009-12-10 Klaus Schuler System and Method for a Genetic Integration of a Database into a High Availability Cluster
US8135765B2 (en) * 2008-06-04 2012-03-13 Software Ag System and method for a genetic integration of a database into a high availability cluster
US20110289343A1 (en) * 2010-05-21 2011-11-24 Schaefer Diane E Managing the Cluster
EP2572273A4 (en) * 2010-05-21 2015-07-22 Unisys Corp Configuring the cluster
US20160359808A1 (en) * 2011-02-16 2016-12-08 Fortinet, Inc. Load balancing among a cluster of firewall security devices
US9825912B2 (en) * 2011-02-16 2017-11-21 Fortinet, Inc. Load balancing among a cluster of firewall security devices
US10084751B2 (en) 2011-02-16 2018-09-25 Fortinet, Inc. Load balancing among a cluster of firewall security devices
US9547858B2 (en) 2012-11-28 2017-01-17 Bank Of America Corporation Real-time multi master transaction
US9699080B2 (en) 2012-12-18 2017-07-04 Huawei Technologies Co., Ltd. Method for determining management domain, network device, and virtual cluster
US9973427B2 (en) 2012-12-18 2018-05-15 Huawei Technologies Co., Ltd. Method for determining management domain, network device, and virtual cluster
CN103229463A (en) * 2012-12-18 2013-07-31 华为技术有限公司 Method for determining administrative domains and network devices and virtual cluster
CN115460074A (en) * 2018-11-16 2022-12-09 瞻博网络公司 Network controller sub-cluster for distributed computing deployment

Also Published As

Publication number Publication date
US20100064009A1 (en) 2010-03-11
US8843561B2 (en) 2014-09-23

Similar Documents

Publication Publication Date Title
US8843561B2 (en) Common cluster model for configuring, managing, and operating different clustering technologies in a data center
US11500670B2 (en) Computing service with configurable virtualization control levels and accelerated launches
US11687422B2 (en) Server clustering in a computing-on-demand system
US8589916B2 (en) Deploying and instantiating multiple instances of applications in automated data centers using application deployment template
US7565310B2 (en) Method and system and program product for a design pattern for automating service provisioning
US8032625B2 (en) Method and system for a network management framework with redundant failover methodology
US9264296B2 (en) Continuous upgrading of computers in a load balanced environment
US7480713B2 (en) Method and system for network management with redundant monitoring and categorization of endpoints
US9329905B2 (en) Method and apparatus for configuring, monitoring and/or managing resource groups including a virtual machine
US20030009540A1 (en) Method and system for presentation and specification of distributed multi-customer configuration management within a network management framework
US20060085530A1 (en) Method and apparatus for configuring, monitoring and/or managing resource groups using web services
US20070088630A1 (en) Assessment and/or deployment of computer network component(s)
US20070156860A1 (en) Implementing computer application topologies on virtual machines
US20080239985A1 (en) Method and apparatus for a services model based provisioning in a multitenant environment
US20030009657A1 (en) Method and system for booting of a target device in a network management system
US20060085668A1 (en) Method and apparatus for configuring, monitoring and/or managing resource groups
JP2009199528A (en) Computer system for managing service process including two or more service steps, and method and computer program therefor
US10230567B2 (en) Management of a plurality of system control networks
EP2815346A1 (en) Coordination of processes in cloud computing environments
US8204972B2 (en) Management of logical networks for multiple customers within a network management framework
US20060293877A1 (en) Method and apparatus for uni-lingual workflow usage in multi-lingual data center environments
EP1061445A2 (en) Web-based enterprise management with transport neutral client interface
US20050283531A1 (en) Method and apparatus for combining resource properties and device operations using stateful Web services
Watters Solaris 8 Administrator's Guide: Help for Network Administrators
Shapiro Windows server 2008 bible

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHEN, MING;LUMPP, THOMAS JUERGEN;MUELLER, MARKUS;AND OTHERS;REEL/FRAME:016300/0608;SIGNING DATES FROM 20050422 TO 20050427

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE