EP1761867A2 - Clusterization with automated deployment of a cluster-unaware application - Google Patents
- Publication number
- EP1761867A2 (application EP05738713A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- cluster
- group
- processor
- node
- participating nodes
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/60—Software deployment
- G06F8/65—Updates
Definitions
- Embodiments of the invention are in the field of clusterization of software applications, and more specifically, relate to a method of clusterizing a cluster- unaware application with automated deployment without modifying the cluster- unaware application.
- a cluster is a group of computers that work together to run a common set of applications and appear as a single system to the client and applications.
- the computers are physically connected by cables and programmatically connected by cluster software. These connections allow the computers to use failover and load balancing, which is not possible with a stand-alone computer.
- Clustering is provided by cluster software such as the Microsoft Cluster Service (MSCS) from Microsoft Corporation.
- MSCS is typically used to host mission-critical applications.
- the cluster is designed so as to avoid a single point-of-failure.
- Applications can be distributed over more than one computer (also called node), achieving a degree of parallelism and failure recovery, and providing more availability.
- Multiple nodes in a cluster remain in constant communication. If one of the nodes in a cluster becomes unavailable as a result of failure or maintenance, another node takes over the failing node's workload and begins providing service. This process is known as failover.
- With very high availability, users who were accessing the service would be able to continue to access the service, unaware that it was briefly interrupted and is now provided by a different node.
- An embodiment of the present invention is a technique for clusterizing a cluster-unaware application. Based on an analysis of the cluster-unaware application, the binaries of the cluster-unaware application are differentiated from the data files of the cluster-unaware application. DLL files corresponding to a custom resource type for the cluster-unaware application are created based on the behavior of the cluster-unaware application. A cluster and participating nodes in the cluster are identified. A cluster group including a group of basic cluster resources for hosting the cluster-unaware application in the cluster is created. The binaries and the data files are deployed automatically in the cluster. The DLL files corresponding to the custom resource type are deployed automatically on the participating nodes in the cluster.
- Figure 1 is a diagram illustrating a system in which one embodiment of the invention can be practiced.
- Figure 2 shows the information that forms the clusterized application X residing on a node in one embodiment of the present invention.
- Figure 3 is a flowchart illustrating the method of the present invention.
- Figure 4 illustrates the process of creating a cluster group including a group of basic cluster resources for hosting the cluster-unaware application in the cluster (block 308 of Figure 3).
- Figure 5 illustrates an embodiment of the optional process of verifying that the group of basic cluster resources is capable of being hosted by each of the participating nodes (optional block 310 of Figure 3).
- Figure 6 illustrates an embodiment of the process of deploying the binaries and the data files in the cluster (block 312 of Figure 3).
- Figure 7 illustrates an embodiment of the process of deploying the DLL files corresponding to the custom resource type on the participating nodes in the cluster (block 314 of Figure 3).
- Figure 8 illustrates an embodiment of the optional process of performing a diagnostics test procedure to verify that the cluster-unaware application has been properly clusterized (optional block 316 of Figure 3).
- An embodiment of the present invention is a technique for clusterizing a cluster-unaware application. Based on an analysis of the cluster-unaware application, the binaries of the cluster-unaware application are differentiated from the data files of the cluster-unaware application. Dynamic Link Library (DLL) files corresponding to a custom resource type for the cluster-unaware application are created based on the behavior of the cluster-unaware application. A cluster and participating nodes in the cluster are identified. A cluster group including a group of basic cluster resources for hosting the cluster-unaware application in the cluster is created. The binaries and the data files are deployed in the cluster. The DLL files corresponding to the custom resource type are deployed on the participating nodes in the cluster.
- a clusterized application is an application capable of running in a cluster environment.
- a cluster-unaware application can be clusterized if it has the following characteristics. First, the application uses TCP/IP as its network protocol. Second, the application maintains its data in a configurable location.
- the application supports transaction processing.
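- The three criteria above can be captured as a simple eligibility check. The following is an illustrative sketch only, not part of the patented method; all field names are hypothetical:

```python
# Hypothetical sketch: check whether an application profile meets the
# clusterization prerequisites listed above. Field names are assumptions.

def can_clusterize(app):
    """Return True if the application satisfies all three stated criteria."""
    return (
        app.get("network_protocol") == "TCP/IP"           # criterion 1: TCP/IP networking
        and app.get("data_location_configurable", False)  # criterion 2: configurable data location
        and app.get("supports_transactions", False)       # criterion 3: transaction processing
    )
```

An application missing any one of the criteria would be rejected by such a check.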
- FIG. 1 is a diagram illustrating an exemplary system 100 in which one embodiment of the invention can be practiced.
- the system 100 includes a server system 104 interfacing with a client 180 and with a processor platform 190.
- the client 180 communicates with the server system via a communication network.
- the client can access an application running on the server system using the virtual Internet Protocol (IP) address of the application.
- the server system 104 includes a cluster 106.
- the cluster 106 includes a node 110, a node 140, and a common storage 170.
- Each of the nodes 110, 140 is a computer system.
- Node 110 comprises a memory 120, a processor unit 130 and an input/output unit 132.
- node 140 comprises a memory 150, a processor unit 160 and an input/output unit 162.
- Each processor unit may include several elements such as data queue, arithmetic logical unit, memory read register, memory write register, etc.
- Cluster software such as the Microsoft Cluster Service (MSCS) provides clustering services for a cluster. In order for the cluster 106 to operate as a cluster, identical copies of the cluster software must be running on each of the nodes 110, 140. Copy 122 of the cluster software resides in the memory 120 of node 110. Copy 152 of the cluster software resides in the memory 150 of node 140.
- a cluster folder containing cluster-level information is included in the memory of each of the nodes of the cluster.
- Cluster-level information includes DLL files of the applications that are running in the cluster.
- Cluster folder 128 is included in the memory 120 of node 110.
- Cluster folder 158 is included in the memory 150 of node 140.
- a group of cluster-aware applications 126 is stored in the memory 120 of node 110. Identical copies 156 of these applications are stored in the memory 150 of node 140.
- Application X is a cluster-unaware application. The present invention clusterizes application X so that it can run in a cluster, and stores the clusterized application X in the memory 120 of node 110. An identical copy of the clusterized application X is stored in the memory 150 of node 140.
- Computer nodes 110 and 140 access a common storage 170.
- the common storage 170 contains information that is shared by the nodes in the cluster. This information includes the data of the applications running in the cluster. Typically, only one computer node can access the common storage at a time. It is noted that, in other cluster configurations using different types of cluster software and different types of operating systems for the computer nodes, a common storage may not be needed. In such a cluster with no common storage, data for the clustered applications is stored with the clustered applications, and is copied and updated for each of the nodes in the cluster.
- the processor platform 190 is a computer system that interfaces with the cluster 106. It includes a processor 192, a memory 194, and a mass storage device 196.
- the processor 192 represents a central processing unit of any type of architecture, such as embedded processors, mobile processors, microcontrollers, digital signal processors, superscalar computers, vector processors, single instruction multiple data (SIMD) computers, complex instruction set computers (CISC), reduced instruction set computers (RISC), very long instruction word (VLIW), or hybrid architecture.
- the memory 194 stores system code and data.
- the memory 194 is typically implemented with dynamic random access memory (DRAM) or static random access memory (SRAM).
- the system memory may include program code or code segments implementing one embodiment of the invention.
- the memory 194 includes a clusterizer of the present invention when loaded from mass storage 196. The clusterizer may also simulate the clusterizing functions described herein.
- the clusterizer contains instructions that, when executed by the processor 192, cause the processor to perform the tasks or operations as described in the following.
- the mass storage device 196 stores archive information such as code, programs, files, data, databases, applications, and operating systems.
- the mass storage device 196 may include a compact disk (CD) ROM, a digital video/versatile disc (DVD), a floppy drive, a hard drive, and any other magnetic or optical storage device such as a tape drive, a tape library, redundant arrays of inexpensive disks (RAID), etc.
- the mass storage device 196 provides a mechanism to read machine-accessible media.
- the machine-accessible media may contain computer readable program code to perform tasks as described in the following.
- FIG. 2 shows the information that forms the clusterized application X 126 (respectively 156) residing on node 110 (respectively, node 140) in one embodiment of the present invention.
- the cluster-unaware application X comprises the binaries and the data files. After clusterization, the binaries of application X are stored in the clusterized application X on each of the participating nodes of the cluster, while the data files of the application X are stored in the common storage 170 ( Figure 1). It is noted that, in effect, clusterization using the method of the present invention has made no modification to the binaries and the data files per se of the cluster-unaware application X.
- Clusterized application X 126 comprises the binaries 202 of the cluster- unaware application X.
- the clusterized application X 126 also comprises basic cluster resources 204 for the application X, and an instance of the custom resource type represented by DLL files 206.
- the basic cluster resources 204 and the instance of the custom resource type 206 are logical objects created by the cluster at cluster- level.
- the basic cluster resources 204 include a common storage resource identifying the common storage 170, an application IP address resource identifying the IP address of the clusterized application X, and a network name resource identifying the network name of the clusterized application X.
- the DLL files 206 for the custom resource type include the cluster resource dynamic link library (DLL) file and the cluster administrator extension DLL file. These DLL files are stored in the cluster folder 128 in node 110 ( Figure 1).
- FIG. 3 is a flowchart illustrating the method of the present invention.
- blocks 302 and 304 of the flowchart are performed manually.
- the cluster-unaware application X is analyzed and, based on this analysis, the binaries of the cluster-unaware application X are differentiated from its data files (block 302). Binaries are a file or files that contain the executable form of the program code whose execution causes the application to run.
- Binaries do not include data files required by the application since data generally needs to be updated and data size is usually very large.
- the behavior of the cluster-unaware application X is also analyzed in order to determine a custom resource type for the cluster-unaware application.
- a custom resource type means that the implemented resource type is different from the standard or out-of-the-box Microsoft cluster resource such as IP Address resource or WINS service resource.
- the behavior analysis includes determination of how the cluster-unaware application behaves when it starts, or restarts, or stops, and how its health can be checked on a periodic basis. Based on this behavior analysis, DLL files corresponding to and defining the custom resource type for the cluster-unaware application are created (block 304).
- these DLL files are used to send command requests to the cluster- unaware application X to control its behavior.
- these custom resource DLL files are created using the Microsoft Visual C++® development system.
- Microsoft Corporation has published a number of Technical Articles for Writing Microsoft Cluster Server (MSCS) Resource DLLs. These articles describe in detail how to use the Microsoft Visual C++® development system to develop resource DLLs.
- Resource DLLs are created by running the "Resource Type AppWizard" of Microsoft Corporation within the developer studio. This builds a skeletal resource DLL and/or Cluster Administrator extension DLL with all the entry points defined, declared, and exported.
- the skeletal resource DLL provides only the most basic failover and failback capability.
- the skeletal resource DLL is customized to produce the cluster resource DLL.
- the behavior of the cluster resource DLL depends on the needs of the application being clusterized, and may include some or all of the following functions/features: Startup, Open, Online, LooksAlive, IsAlive, Offline, Close, Terminate, ResourceControl, ResourceTypeControl.
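- The entry points listed above drive a simple resource state machine. As an illustrative, language-shifted sketch (the actual resource DLLs are written in C/C++ against the Cluster API and are called by the cluster service, not by application code), the behavior they implement can be modeled like this:

```python
# Toy model of the resource-DLL entry points named above. This is a
# hypothetical simulation, not a real resource DLL.

class CustomResource:
    def __init__(self):
        self.state = "closed"

    def open(self):            # Open: allocate per-resource state
        self.state = "offline"

    def online(self):          # Online: start the application's services
        assert self.state == "offline", "resource must be opened and offline first"
        self.state = "online"

    def looks_alive(self):     # LooksAlive: cheap, frequent health check
        return self.state == "online"

    def is_alive(self):        # IsAlive: thorough, less frequent health check
        return self.state == "online"

    def offline(self):         # Offline: stop services in an orderly way
        self.state = "offline"

    def terminate(self):       # Terminate: abrupt stop on failure
        self.state = "offline"

    def close(self):           # Close: release per-resource state
        self.state = "closed"
```

In a real DLL, LooksAlive and IsAlive would probe the application's processes or services rather than a state variable.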
- the cluster resource DLL and the cluster administrator extension DLL allow application X to function properly in a cluster environment.
- the capabilities of the DLLs are directly related to the capabilities of the cluster- unaware application X.
- the cluster resource DLL file includes program code that can bring the application X on-line in an orderly way by starting underlying programs of application X. It also includes program code for taking the application X off-line in an orderly way by performing required actions to save the application state information before stopping the services of application X.
- the following functionality is implemented in the cluster resource DLL:
  • Ability to bring the application resource on-line by starting the application components in the required order.
  • Ability to take the application resource off-line by stopping the application components in the required order. This might involve, for some applications, sending commands to the application to persist application data and/or other data that is needed when the application is failed over to another node in the cluster.
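- The ordered start/stop behavior described above can be sketched as follows (an illustrative, hypothetical model; the real logic lives inside the C++ resource DLL's Online/Offline entry points):

```python
# Hypothetical sketch of ordered bring-online / take-offline with an
# optional persist step before stopping, as described above.

def bring_online(components, log):
    # Start application components in the required order.
    for c in components:
        log.append(("start", c))

def take_offline(components, log, persist=None):
    # Optionally ask the application to persist data that will be needed
    # after failover, then stop components in reverse start order.
    if persist is not None:
        log.append(("persist", persist))
    for c in reversed(components):
        log.append(("stop", c))
```

Stopping in reverse order mirrors common service-dependency practice; the required order is application-specific.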
- Process 300 identifies a cluster and participating nodes in the cluster (block 306).
- the cluster and the participating nodes may be identified via user input to a dialog box, or via default settings.
- Process 300 allows a user to select a subset of nodes or all the nodes in the cluster to participate in the clusterization of the application X.
- Process 300 creates a cluster group including a group of basic cluster resources for hosting the cluster-unaware application in the cluster (block 308). After creating the group of basic cluster resources, process 300 can perform the optional step of verifying that the group of basic cluster resources is capable of being hosted by each of the participating nodes (block 310).
- Process 300 deploys the binaries and the data files in the cluster (block 312) and deploys the DLL files corresponding to the custom resource type on the participating nodes in the cluster (block 314).
- process 300 can perform the optional step of performing a diagnostics test procedure to verify that the cluster-unaware application has been properly clusterized (block 316).
- Process 300 then terminates.
- Figure 4 illustrates the process 308 of creating a cluster group including a group of basic cluster resources for hosting the cluster-unaware application in the cluster (block 308 of Figure 3).
- process 308 associates a specified cluster group name with the cluster group (block 402).
- Process 308 verifies whether a cluster group by the specified name already exists in the cluster. If it does not already exist, process 308 creates the cluster group with the specified name.
- Process 308 includes a specified common storage resource in the cluster group, the specified common storage resource identifying a common storage where the data files of the application X reside (block 404).
- Process 308 verifies whether the specified common storage resource already exists in the specified cluster group. If it does not already exist in this cluster group but exists in another cluster group, process 308 moves the specified common storage resource from this other cluster group to the specified cluster group. If there are any dependent cluster resources in this other cluster group, the move will fail automatically and the user will be notified of what went wrong. Note that this is the common storage where all data specific to the clusterized application X will be stored for access by all cluster nodes that are participating in the clusterization of application X.
- Process 308 includes a specified IP address resource in the cluster group, the specified IP address resource identifying an IP address to be used to support the clusterized application X name (block 406). Process 308 verifies whether the specified IP address already exists in the cluster as an IP address cluster resource.
- process 308 creates the IP address resource in the specified cluster group per user specification. If the IP address already exists as a cluster resource in this cluster, process 308 verifies if it already exists in the specified cluster group. If not, process 308 moves the IP address resource to the specified cluster group from its current cluster group. Note that this is the IP address that is used to host the virtual application X name by which all program clients of application X access the clusterized application X.
- Process 308 includes a specified network name resource in the cluster group, the specified network name resource identifying a network name to be used by client programs to connect to the clusterized application X (block 408). Process 308 verifies if the specified network name already exists in the cluster as a network name cluster resource.
- process 308 creates the network name resource in the cluster group per user specification. If the network name exists as a cluster resource in this cluster, process 308 verifies if it already exists in the specified cluster group. If not, process 308 moves the network name resource to the specified cluster group from its current cluster group. Note that this is the network name by which all program clients of application X will access the clusterized application X.
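- Blocks 404, 406 and 408 all follow the same "verify, create, or move" pattern. The following is an illustrative sketch of that pattern against a toy in-memory cluster model; all names and the data layout are hypothetical (the real process uses cluster API calls):

```python
# Hypothetical model: `cluster` maps group names to {resource name: type}.
# Ensure a resource exists in the target group, creating it or moving it
# from another group, and failing the move if dependents remain behind.

def ensure_resource(cluster, group_name, res_name, res_type, dependents_of=None):
    groups = cluster.setdefault("groups", {})
    target = groups.setdefault(group_name, {})
    if res_name in target:
        return "exists"                     # already in the specified group
    for other_name, other in groups.items():
        if other_name != group_name and res_name in other:
            deps = (dependents_of or {}).get(res_name, [])
            if any(d in other for d in deps):
                # dependent resources would be stranded: the move fails
                raise RuntimeError(
                    f"cannot move {res_name}: dependents remain in {other_name}")
            target[res_name] = other.pop(res_name)
            return "moved"                  # moved from its current group
    target[res_name] = res_type
    return "created"                        # did not exist anywhere: create it
```

The same helper would be called once each for the common storage resource, the IP address resource, and the network name resource.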
- Figure 5 illustrates an embodiment of the optional process 310 of verifying that the group of basic cluster resources is capable of being hosted by each of the participating nodes (optional block 310 of Figure 3). Process 310 performs confidence tests on the group of basic cluster resources to verify that the target cluster is operational and is currently in a healthy state.
- the basic cluster resources in the cluster group are capable of being hosted by each of the participating nodes. If, for example, one of the participating nodes cannot access the common storage specified in the cluster group, this indicates a problem that will prevent successful clusterization. This also indicates, at a higher level, the health of the overall cluster and its nodes.
- Upon start, process 310 brings the group of basic cluster resources online on a current node of the participating nodes (block 502). Process 310 fails over the group of basic cluster resources to another node in the group of the participating nodes (block 504). Process 310 then takes the group of basic cluster resources off-line (block 506). Process 310 repeats blocks 502 through 506 for each remaining node of the participating nodes (block 508).
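- The per-node confidence loop of Figure 5 can be sketched as follows (an illustrative, hypothetical model; the real process issues cluster API calls and observes the actual resource states):

```python
# Hypothetical sketch of the Figure 5 confidence test: for each
# participating node, bring the basic resource group online there,
# fail it over to another node, then take it offline.

def verify_hosting(group, nodes, events):
    for i, node in enumerate(nodes):
        events.append(("online", group, node))                  # block 502
        other = nodes[(i + 1) % len(nodes)]                     # pick another node
        events.append(("failover", group, other))               # block 504
        events.append(("offline", group, other))                # block 506
    # loop over all participating nodes implements block 508
```

A node that cannot host the group (for example, one that cannot access the common storage) would surface as a failure at its "online" or "failover" step.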
- Figure 6 illustrates an embodiment of the process 312 of deploying the binaries and the data files in the cluster (block 312 of Figure 3).
- Upon start, process 312 verifies that a participating node meets the software requirements of the cluster-unaware application by executing a first install program on the participating node (block 602). Block 602 is optional; process 312 executes it only if the cluster-unaware application to be clusterized has software requirements.
- Process 312 installs the binaries and the data files by executing a second install program on the participating node (block 604). Specifically, process 312 installs the binaries on the participating node and installs the data files on the common storage identified by the specified common storage resource in the cluster group (common storage 170 of Figure 1). The binaries are installed locally on each participating node, but at the same location across all the participating nodes (for example, at location C:\Program Files\APPLX\APPLX9.1 on each node). Installing the data on the common storage allows the data to be accessible to all the nodes participating in the clusterization of application X.
- Process 312 installs at least one configuration name for the cluster- unaware application and the network name for the cluster-unaware application by executing a third install program (block 606).
- the configuration name determines the service name. There may be more than one configuration name, each corresponding to a service.
- the network name previously specified by the network name resource in the cluster group, is the name to be used by the client programs to connect to the clusterized application X. Process 312 then terminates.
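- The three deployment steps of Figure 6 can be sketched as follows (an illustrative, hypothetical model of the install programs' effects; the nodes and storage are plain dictionaries here, and the install path reuses the document's example):

```python
# Hypothetical sketch of Figure 6: block 602 (optional prerequisite check
# per node), block 604 (binaries at the same local path on every node,
# data files on the common storage), block 606 (configuration name(s)
# and network name).

def deploy(nodes, common_storage, binaries, data_files,
           config_names, network_name, prereq_check=None):
    for node in nodes:
        if prereq_check is not None and not prereq_check(node):     # block 602
            raise RuntimeError(f"node {node['name']} fails software requirements")
        # block 604: identical local path on every participating node
        node["local_binaries"] = {r"C:\Program Files\APPLX\APPLX9.1": list(binaries)}
    common_storage["data"] = list(data_files)                        # block 604
    common_storage["config"] = {"configurations": list(config_names),  # block 606
                                "network_name": network_name}
```

Keeping the binary path identical across nodes is what lets the custom resource type start the application the same way wherever the group is hosted.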
- FIG. 7 illustrates an embodiment of the process 314 of deploying the DLL files corresponding to the custom resource type on the participating nodes in the cluster (block 314 of Figure 3).
- the DLL files are the cluster resource DLL and the cluster administrator extension DLL.
- process 314 installs the cluster resource DLL and the cluster administrator extension DLL on each of the participating nodes (block 702). Installing these DLL files includes storing them in the cluster folder of each of the participating nodes.
- Process 314 registers the custom resource type, defined by the cluster resource DLL and the cluster administrator extension DLL, with the cluster so that the cluster is aware of this custom resource type (block 704). Process 314 then terminates.
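- Blocks 702 and 704 can be sketched as follows (an illustrative, hypothetical model; in the real process the DLLs are copied into each node's cluster folder and the resource type is registered through the cluster software):

```python
# Hypothetical sketch of Figure 7: copy both DLLs into each participating
# node's cluster folder (block 702), then register the custom resource
# type with the cluster so the cluster is aware of it (block 704).

def deploy_custom_resource_type(cluster, nodes, type_name,
                                resource_dll, admin_ext_dll):
    for node in nodes:                                        # block 702
        folder = node.setdefault("cluster_folder", [])
        folder.extend([resource_dll, admin_ext_dll])
    cluster.setdefault("resource_types", {})[type_name] = {   # block 704
        "dll": resource_dll,
        "admin_extension": admin_ext_dll,
    }
```

Registration is cluster-wide, while the DLL copies are per-node, which is why both steps are needed.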
- Figure 8 illustrates an embodiment of the optional process of performing a diagnostics test procedure to verify that the cluster-unaware application has been properly clusterized (optional block 316 of Figure 3).
- Process 316 performs diagnostics tests on the cluster group to verify that the clusterized application X can operate in the target cluster.
- the diagnostics tests also include an application specific test to test that the client programs of the clusterized application X can connect to the clusterized application X.
- the application specific test acts as an application client and verifies the client connectivity.
- Process 316 performs a number of tests, including: cluster group fail-over to all possible nodes, checking of online/offline functionality, and testing of the ability to shut down and start each of the nodes. This automated integrity testing ensures that each node and each functional element is systematically and completely tested.
- Upon start, process 316 brings the cluster group, which now includes the group of basic cluster resources and the custom resource type, on-line on a current node of the participating nodes (block 802).
- Process 316 fails over the cluster group to another node of the participating nodes (block 804).
- Process 316 then takes the cluster group off-line (block 806).
- Process 316 repeats blocks 802 through 806 for each remaining node of the participating nodes (block 808).
- Process 316 then shuts down the current node (block 810) and verifies that the cluster group fails over properly to another node of the participating nodes (block 812).
- Process 316 repeats blocks 810 and 812 for each remaining node of the participating nodes (block 814).
- the cluster group comes back on-line on the node where it existed at the start of the test. If any part of this testing process fails, the clusterization process is considered unsuccessful.
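- The two passes of the diagnostics procedure in Figure 8 can be sketched as follows (an illustrative, hypothetical model in which every step succeeds; the real procedure observes the actual cluster and aborts on any failure):

```python
# Hypothetical sketch of Figure 8. First pass (blocks 802-808): per node,
# bring the full cluster group online, fail it over, take it offline.
# Second pass (blocks 810-814): shut down each node and verify that the
# group fails over to a surviving node, then restart the node.

def run_diagnostics(group, nodes, events):
    for i, node in enumerate(nodes):                      # blocks 802-808
        events.append(("online", group, node))
        events.append(("failover", group, nodes[(i + 1) % len(nodes)]))
        events.append(("offline", group))
    for i, node in enumerate(nodes):                      # blocks 810-814
        events.append(("shutdown", node))
        survivor = nodes[(i + 1) % len(nodes)]
        events.append(("failover-check", group, survivor))
        events.append(("restart", node))
    return True   # in this toy model every step succeeds
```

Exercising shutdown as well as graceful failover distinguishes planned moves from real node failures, both of which the clusterized application must survive.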
- Elements of one embodiment of the invention may be implemented by hardware, firmware, software or any combination thereof.
- hardware generally refers to an element having a physical structure such as electronic, electromagnetic, optical, electro-optical, mechanical, electro-mechanical parts, etc.
- software generally refers to a logical structure, a method, a procedure, a program, a routine, a process, an algorithm, a formula, a function, an expression, etc.
- firmware generally refers to a logical structure, a method, a procedure, a program, a routine, a process, an algorithm, a formula, a function, an expression, etc that is implemented or embodied in a hardware structure (e.g., flash memory, ROM, EROM).
- firmware may include microcode, writable control store, micro-programmed structure.
- the elements of an embodiment of the present invention are essentially the code segments to perform the necessary tasks.
- the software/firmware may include the actual code to carry out the operations described in one embodiment of the invention, or code that emulates or simulates the operations.
- the program or code segments can be stored in a processor or machine accessible medium or transmitted by a computer data signal embodied in a carrier wave, or a signal modulated by a carrier, over a transmission medium.
- the "processor readable or accessible medium” or “machine readable or accessible medium” may include any medium that can store, transmit, or transfer information.
- Examples of the processor readable or machine accessible medium include an electronic circuit, a semiconductor memory device, a read only memory (ROM), a flash memory, an erasable ROM (EROM), a floppy diskette, a compact disk (CD) ROM, an optical disk, a hard disk, a fiber optic medium, a radio frequency (RF) link, etc.
- the computer data signal may include any signal that can propagate over a transmission medium such as electronic network channels, optical fibers, air, electromagnetic, RF links, etc.
- the code segments may be downloaded via computer networks such as the Internet, Intranet, etc.
- the machine accessible medium may be embodied in an article of manufacture.
- the machine accessible medium may include data that, when accessed by a machine, cause the machine to perform the operations described in the following.
- the machine accessible medium may also include program code embedded therein.
- the program code may include machine-readable code to perform the operations described in the following.
- data here refers to any type of information that is encoded for machine-readable purposes. Therefore, it may include program, code, data, file, etc.
- All or part of an embodiment of the invention may be implemented by hardware, software, or firmware, or any combination thereof.
- the hardware, software, or firmware element may have several modules coupled to one another.
- a hardware module is coupled to another module by mechanical, electrical, optical, electromagnetic or any physical connections.
- a software module is coupled to another module by a function, procedure, method, subprogram, or subroutine call, a jump, a link, a parameter, variable, and argument passing, a function return, etc.
- a software module is coupled to another module to receive variables, parameters, arguments, pointers, etc. and/or to generate or pass results, updated variables, pointers, etc.
- a firmware module is coupled to another module by any combination of hardware and software coupling methods above.
- a hardware, software, or firmware module may be coupled to any one of another hardware, software, or firmware module.
- a module may also be a software driver or interface to interact with the operating system running on the platform.
- a module may also be a hardware driver to configure, set up, initialize, send and receive data to and from a hardware device.
- An apparatus may include any combination of hardware, software, and firmware modules.
- One embodiment of the invention may be described as a process which is usually depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed. A process may correspond to a method, a program, a procedure, etc.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US81465704A | 2004-03-31 | 2004-03-31 | |
| PCT/US2005/010661 WO2005096736A2 (en) | 2004-03-31 | 2005-03-30 | Clusterization with automated deployment of a cluster-unaware application |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| EP1761867A2 (de) | 2007-03-14 |
Family
ID=35125519
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP05738713A Withdrawn EP1761867A2 (de) | 2004-03-31 | 2005-03-30 | Clusterization mit automatisiertem einsatz einer cluster-unaware-anwendung |
Country Status (2)
| Country | Link |
|---|---|
| EP (1) | EP1761867A2 (de) |
| WO (1) | WO2005096736A2 (de) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8621427B2 (en) | 2010-06-30 | 2013-12-31 | International Business Machines Corporation | Code modification of rule-based implementations |
| CN104679717B (zh) * | 2015-02-15 | 2018-11-27 | 北京京东尚科信息技术有限公司 | 集群弹性部署的方法和管理系统 |
| CN116680074B (zh) * | 2023-06-06 | 2024-12-06 | 林爱珊 | 一种分布式虚拟现实系统及数据中心 |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6134673A (en) * | 1997-05-13 | 2000-10-17 | Micron Electronics, Inc. | Method for clustering software applications |
| US20030028594A1 (en) * | 2001-07-31 | 2003-02-06 | International Business Machines Corporation | Managing intended group membership using domains |
| US7234072B2 (en) * | 2003-04-17 | 2007-06-19 | Computer Associates Think, Inc. | Method and system for making an application highly available |
- 2005
- 2005-03-30 EP EP05738713A patent/EP1761867A2/de not_active Withdrawn
- 2005-03-30 WO PCT/US2005/010661 patent/WO2005096736A2/en not_active Ceased
Non-Patent Citations (1)
| Title |
|---|
| See references of WO2005096736A3 * |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2005096736A2 (en) | 2005-10-20 |
| WO2005096736A3 (en) | 2007-03-15 |
Similar Documents
| Publication | Title |
|---|---|
| JP5535484B2 (ja) | Automated software testing framework |
| US8800047B2 (en) | System, method and program product for dynamically performing an audit and security compliance validation in an operating environment |
| US9292275B2 (en) | System and method for upgrading kernels in cloud computing environments |
| US20060047776A1 (en) | Automated failover in a cluster of geographically dispersed server nodes using data replication over a long distance communication link |
| US20200241865A1 (en) | Release orchestration for performing pre-release, version specific testing to validate application versions |
| US9043781B2 (en) | Algorithm for automated enterprise deployments |
| US10303458B2 (en) | Multi-platform installer |
| US11385993B2 (en) | Dynamic integration of command line utilities |
| US8352916B2 (en) | Facilitating the automated testing of daily builds of software |
| US11487878B1 (en) | Identifying cooperating processes for automated containerization |
| US8341599B1 (en) | Environments sharing remote mounted middleware |
| US11442765B1 (en) | Identifying dependencies for processes for automated containerization |
| US7698391B2 (en) | Performing a provisioning operation associated with a software application on a subset of the nodes on which the software application is to operate |
| US9256509B1 (en) | Computing environment analyzer |
| US7392148B2 (en) | Heterogeneous multipath path network test system |
| US7472052B2 (en) | Method, apparatus and computer program product for simulating a storage configuration for a computer system |
| US20070168728A1 (en) | Automated context-sensitive operating system switch |
| US9342784B1 (en) | Rule based module for analyzing computing environments |
| US20120036496A1 (en) | Plug-in based high availability application management framework (amf) |
| US20080115134A1 (en) | Repair of system defects with reduced application downtime |
| US7434104B1 (en) | Method and system for efficiently testing core functionality of clustered configurations |
| Lowe | Mastering VMware vSphere 4 |
| EP1761867A2 (de) | Clusterization with automated deployment of a cluster-unaware application |
| US7152189B2 (en) | Testing distributed services by using multiple boots to timeshare a single computer |
| US20220413887A1 (en) | Recoverable container platform cluster for testing |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| | 17P | Request for examination filed | Effective date: 20061027 |
| | AK | Designated contracting states | Kind code of ref document: A2; Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR |
| | AX | Request for extension of the european patent | Extension state: AL BA HR LV MK YU |
| | PUAK | Availability of information related to the publication of the international search report | Free format text: ORIGINAL CODE: 0009015 |
| | RIC1 | Information provided on ipc code assigned before grant | Ipc: G06F 7/00 20060101AFI20070322BHEP |
| | DAX | Request for extension of the european patent (deleted) | |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN |
| | 18W | Application withdrawn | Effective date: 20110218 |