WO2001075677A1 - Constructing a component management database for managing roles using directed graphs - Google Patents

Constructing a component management database for managing roles using directed graphs

Info

Publication number
WO2001075677A1
Authority
WO
WIPO (PCT)
Prior art keywords
components
component
availability
database
hardware
Prior art date
Application number
PCT/US2001/010726
Other languages
English (en)
Inventor
Bryan Klisch
John Vogel
Original Assignee
Goahead Software Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goahead Software Inc. filed Critical Goahead Software Inc.
Priority to EP01924615A priority Critical patent/EP1287445A4/fr
Priority to US10/221,514 priority patent/US20040024732A1/en
Priority to JP2001573286A priority patent/JP2003529847A/ja
Publication of WO2001075677A1 publication Critical patent/WO2001075677A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/02 Standardisation; Integration
    • H04L 41/024 Standardisation; Integration using relational databases for representation of network management data, e.g. managing via structured query language [SQL]
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/085 Retrieval of network configuration; Tracking network configuration history
    • H04L 41/0853 Retrieval of network configuration; Tracking network configuration history by actively collecting configuration information or by backing up configuration information
    • H04L 41/0856 Retrieval of network configuration; Tracking network configuration history by actively collecting configuration information or by backing up configuration information by backing up or archiving configuration information
    • H04L 41/30 Decision processes by autonomous network management units using voting and bidding
    • H04L 41/50 Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L 41/5003 Managing SLA; Interaction between SLA and QoS
    • H04L 41/5009 Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]

Definitions

  • The present invention is in the field of increasing the availability of computer systems and networked computer systems.
  • The current generation of interface busses is designed to a standard that permits the hardware components inserted into those busses to be added and removed (inserted and extracted) without having to remove power first.
  • When such an insertion or extraction occurs, a signal is generated noting the event.
  • The present invention provides such a solution.
  • This solution is extensible to manage and shift resources regardless of the type of hardware and software resources that reside on the system.
  • The situation becomes more complicated in a client/server networked environment, which, although it may have lower costs than mainframe systems, also has increasingly complicated management and support issues. These issues are compounded by having multiple applications spread across multiple hardware systems. Administrators trying to keep network availability at a high level need increasingly sophisticated tools to both monitor network performance and correct problems as they arise.
  • The present invention provides those tools.
  • The components themselves use a distributed messaging system to communicate with each other and with a dynamic configuration manager that maintains the database of system components.
  • The dynamic configuration manager retrieves self-describing messages from the components in the system. These messages contain the status and interdependencies of each component. While the system is operational, any change in a component's status is communicated to the dynamic configuration manager, which then has the responsibility to shift the resources available to it to maximize the availability (up time) of the system.
  • FIG. 1 shows a flow chart of how the component management database is used.
  • FIG. 2 shows how the component management instructions interface with the operating system in a web server.
  • FIG. 3 shows a state machine reacting to an insertion or extraction event.
  • FIG. 4 shows how the information from a component management database may be displayed.
  • FIG. 5 shows a directed graph with critical and non-critical dependencies.
  • FIG. 6 shows how information from a component management database may be used to generate an event.
  • Management software needs to intimately understand what managed components are installed in the system and their relationships.
  • The software should dynamically track the topology of these managed components as well as the individual configuration of each component.
  • The software must recognize and manage CPU modules, I/O cards, peripherals, applications, drivers, power supplies, fans and other system components. If the configuration changes, the system should take appropriate action to load or unload drivers and notify other components that may be affected by the change.
  • The interdependence of the various components on each other is vital information for building a highly available system.
  • In order to be effectively managed, components must be represented in one centralized, standard repository.
  • This component management database should contain information on each component as well as the relationships that the components have with one another. For example, a daughterboard that plugs into an I/O card represents a parent-child relationship.
  • The component table should also be able to identify groups of components that share responsibility for a given operation.
  • The component management database should store information regarding the actions that need to be taken for 1) a component that is removed from the system, or 2) a component that is dependent on another component that has been removed from the system.
  • The system needs a mechanism for tracking these events. If the type and location of a component are fixed, the system can poll the component on a regular basis to determine its presence. However, if the type and location of the component vary, then the system needs a more intelligent way of identifying the component. In the preferred embodiment, the component should be able to identify itself to the system and describe its own capabilities, eliminating the need for the management software to have prior knowledge of the component's capabilities.
  • This mechanism is an essential enabler for hot-swap and transient components. To accomplish this, components can be enabled with publish and subscribe capabilities that register with a dynamic configuration manager. When a component is loaded or inserted into the system, it broadcasts its identity to the configuration manager.
  • The configuration manager then queries the component to determine its type and capabilities. The component is then entered into the list of managed components and appropriately monitored. Each different component or each class of component may have its own set of methods that may be called. When the component is removed, the configuration manager triggers the appropriate action. For a card, this could include unloading the drivers and transferring operation to a redundant card.
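The announce/query/remove handshake described above can be sketched as follows. This is a minimal illustration, not the patented implementation: all class names, field names and the removal action are hypothetical.

```python
class Component:
    """A managed component able to describe itself (illustrative sketch)."""
    def __init__(self, name, ctype, capabilities, on_remove=None):
        self.name = name
        self.ctype = ctype
        self.capabilities = capabilities
        # Action the manager triggers when this component is removed.
        self.on_remove = on_remove or (lambda: "no action")

class ConfigurationManager:
    def __init__(self):
        self.managed = {}  # name -> record of type/capabilities/status

    def announce(self, component):
        # The component broadcasts its identity; the manager queries its
        # type and capabilities and enters it into the managed list.
        self.managed[component.name] = {
            "type": component.ctype,
            "capabilities": component.capabilities,
            "status": "operational",
        }

    def remove(self, component):
        # On removal, trigger the appropriate action for this component.
        self.managed.pop(component.name, None)
        return component.on_remove()

mgr = ConfigurationManager()
nic = Component("eth0", "NIC", ["tx", "rx"],
                on_remove=lambda: "unload driver; fail over to eth1")
mgr.announce(nic)
was_registered = "eth0" in mgr.managed
removal_action = mgr.remove(nic)
```

Because the component carries its own description and removal action, the manager needs no prior knowledge of it, which is the point of the self-describing mechanism.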
  • The management software should provide a mechanism for system components such as cards, drivers and applications to communicate with each other, either within the system or with components in other systems within the cluster.
  • A distributed messaging service provides the transport for these messages. This service uses a "publish and subscribe" model. The software provides client, server and routing functionality. To send a message, a component passes messages through the management software. When a new publisher appears, all of the subscribers are notified that a new publisher has registered. When a new subscriber appears, all the publishers are notified that a new subscriber has registered.
  • The messaging service provides a global event class and event name that enable messages to be routed across "bridged" networks. Instead of using broadcast packets that may be blocked by firewalls and routers, the messaging service sets up connections with each system. These individual system connections let messages be routed to the correct system without interference.
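The registration notifications of the publish-and-subscribe model can be sketched as below. This is an assumption-laden toy (event names, component names and the notice format are invented), showing only the behaviour described above: each new publisher is announced to existing subscribers and vice versa.

```python
class MessagingService:
    """Toy publish/subscribe transport with registration notices."""
    def __init__(self):
        self.publishers = {}   # event name -> list of publisher names
        self.subscribers = {}  # event name -> list of (name, callback)
        self.notices = []      # registration notifications delivered

    def register_publisher(self, event, name):
        self.publishers.setdefault(event, []).append(name)
        # Notify every existing subscriber of the new publisher.
        for sub_name, _ in self.subscribers.get(event, []):
            self.notices.append(("new-publisher", event, name, sub_name))

    def register_subscriber(self, event, name, callback):
        self.subscribers.setdefault(event, []).append((name, callback))
        # Notify every existing publisher of the new subscriber.
        for pub in self.publishers.get(event, []):
            self.notices.append(("new-subscriber", event, name, pub))

    def publish(self, event, payload):
        return [cb(payload) for _, cb in self.subscribers.get(event, [])]

svc = MessagingService()
svc.register_subscriber("card.extracted", "driver-mgr",
                        lambda slot: f"unload drivers for {slot}")
svc.register_publisher("card.extracted", "slot-monitor")
responses = svc.publish("card.extracted", "slot-3")
```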
  • Fig. 1 shows a high-level view of how components are managed. Whenever a component is added, modified or removed 12, the component management database 14 is updated to reflect that fact. This database is constantly observed for changes that meet a predetermined criterion 16. When an observed component change meets that criterion, an event 22 may be sent (published) to those states or detectors 24 that are listening for that event (subscribing to the event). If the event being subscribed to then meets another predetermined criterion, a state or detector script 26 is run. This script then has the capability to modify the component management database 14.
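The Fig. 1 loop might be rendered as the toy pipeline below; the "failed status" criterion, the event name and the failover script are all illustrative assumptions, not taken from the patent.

```python
database = {}     # the component management database (Fig. 1, item 14)
subscribers = []  # (event name, detector script) pairs (item 24)
log = []

def subscribe(event_name, script):
    subscribers.append((event_name, script))

def publish(event_name, payload):
    # Deliver the event (item 22) to every subscribing detector.
    for name, script in subscribers:
        if name == event_name:
            script(payload)

def update_component(name, status):
    database[name] = status
    # Observer: the predetermined criterion (item 16) is a component
    # reporting a "failed" status.
    if status == "failed":
        publish("component.failed", name)

def failover_script(name):
    # The detector script (item 26) may itself modify the database.
    log.append(f"failover for {name}")
    database[name] = "spare"

subscribe("component.failed", failover_script)
update_component("board-76", "operational")
update_component("board-76", "failed")
```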
  • Fig. 2 shows the preferred embodiment of the invention as it may be used in a web merchant application.
  • The component management database, configuration management and role management capabilities are provided by the EMP (embedded management process) manager block 40.
  • The EMP has a number of APIs (Application Program Interfaces) that provide functions that a system can call to implement the component management, configuration management and role management process.
  • The applications 42 are written using those APIs.
  • Driver 46 software that provides the interface to other pieces of hardware may also be written to take advantage of functions provided by the APIs.
  • Boards 48, such as the Network Interface Cards (NICs) that are controlled by the drivers 46, can also be integrated into the component management database and managed appropriately using the predetermined operating rules.
  • Fig. 3 shows a state machine, which is an abstraction of the events that a component may react to.
  • A state machine may generate other actions and responses besides the ones that triggered its reaction.
  • The reaction that is generated is determined by the state that the component is in when it receives the event.
  • State S0 exists whenever a card is presently inserted into the proper operating system bus.
  • Event E1 occurs when extraction of the card begins.
  • An instruction is sent to the component management database to set the card's status to "extracting".
  • A follow-on instruction is sent to change the status of its children (the components that depend on the card for their correct operation) to "spare".
  • Event E2 occurs when the card is extracted.
  • The state of the card is now defined as "extracted", an instruction is sent to the database to reflect that status, and a "trace" command is set.
  • The "trace" command is a piece of data that remains in memory to reflect the sequence of operations that affect the components listed in the database. It is possible to reconstruct the history of what occurred by examining the trace events that have been logged.
  • The insertion event E3 is handled much like the extraction event: the instructions issued by the state now request that the card be placed into the database and that the drivers necessary to operate the card be loaded again.
  • The component then requests that its database status be updated to reflect its presence and operation.
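The extraction/insertion sequence above can be written as a small transition table. The table and the database instructions are a sketch: event E4 (drivers loaded, back to operational) and the exact instruction strings are assumptions added to close the cycle, not details from the patent.

```python
# Hypothetical transition table for the Fig. 3 state machine.
TRANSITIONS = {
    ("S0", "E1"): "extracting",         # extraction of the card begins
    ("extracting", "E2"): "extracted",  # card physically removed
    ("extracted", "E3"): "inserting",   # card re-inserted
    ("inserting", "E4"): "S0",          # assumed: drivers loaded, operational
}

db_ops = []  # instructions sent to the component management database

def fire(state, event):
    new_state = TRANSITIONS[(state, event)]
    # Side effects depend on the state being entered.
    if new_state == "extracting":
        db_ops.append("set status='extracting'")
        db_ops.append("set children status='spare'")
    elif new_state == "extracted":
        db_ops.append("set status='extracted'; set trace")
    elif new_state == "inserting":
        db_ops.append("add card to database; request driver load")
    elif new_state == "S0":
        db_ops.append("set status='operational'")
    return new_state

state = "S0"
for event in ("E1", "E2", "E3", "E4"):
    state = fire(state, event)
```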
  • Fig. 4 shows one of many ways the information residing in the configuration management database may be displayed.
  • The address field 50 of the database is the global IP address of the component listed. The IP address allows this information to be used not only on a specific network but also across networks using the Internet.
  • The communications protocol used to send and receive information across the networks, in the preferred embodiment, is TCP/IP.
  • The preferred API for accessing the TCP/IP protocol is the sockets interface.
  • An address using TCP/IP sockets consists of the Internet address (IP_address) and a port number 52.
  • The port is the entry point to an application that resides on a host 54 (a networked machine).
  • The database also gives the name of the cluster 56 (a collection of interconnected whole computers used as a single unified computing resource).
  • The next fields are the management role 58 currently assumed by the host and, last, the desired management role 60 that the system tries to obtain.
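The Fig. 4 record could be modelled as a simple typed structure; the field names follow the description above, while the example values are made up.

```python
from dataclasses import dataclass

@dataclass
class ComponentRecord:
    """One row of the configuration management database (Fig. 4)."""
    address: str       # global IP address of the component (field 50)
    port: int          # TCP/IP socket port number (field 52)
    host: str          # networked machine hosting the application (field 54)
    cluster: str       # cluster name (field 56)
    role: str          # management role currently assumed (field 58)
    desired_role: str  # management role the system tries to obtain (field 60)

rec = ComponentRecord(address="10.0.0.5", port=8080, host="machine-a",
                      cluster="web-cluster", role="client",
                      desired_role="manager")
```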
  • The protocol used is HTTP (Hypertext Transfer Protocol), which establishes a client/server connection, transmits and receives parameters including a returned file, and then breaks the client/server connection.
  • A copy of the component management database information is generated by a small-footprint Web server and made available to other nodes in the system.
  • This web server runs on top of the operating system that is also running the component management database system.
  • Information and messages that need to be sent across the network using the TCP/IP protocol are first translated into the Extensible Markup Language (XML) using tags specifically defined to identify the parameters of the components to be monitored and controlled.
  • The component management database may be maintained in the dynamic memory of the processor board, a duplicate copy may be maintained on the computer's or network's hard drive, and yet another copy or copies are sent, using XML markup, to the client components on the other linked networks.
  • Clusters of components may be managed by running the common component management database instructions on each branch of the cluster. This allows the cluster to be centrally managed. The branches of the cluster can find each other and communicate across the network. To make a set of these instructions into a single entity, a single cluster name and communication port is assigned to them. As soon as the system is booted up, the instructions begin to broadcast their existence to each other. Once they are all communicating, they begin to select an overall cluster manager. The cluster manager may be preselected or selected dynamically by a process of nomination and "voting". Once a cluster manager is selected, the other entities become clients of that manager. If no manager is selected, a timing mechanism engages that selects the cluster manager from the group.
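The three selection paths (preselected manager, nomination and voting, timeout fallback) might look like the sketch below. The concrete rules, most votes wins with alphabetical tie-break, and lowest node name on timeout, are assumptions; the patent does not specify them.

```python
def select_manager(nodes, preselected=None, votes=None):
    """Return the cluster manager for `nodes` (illustrative rules only)."""
    # 1. A preselected manager wins outright.
    if preselected in nodes:
        return preselected
    # 2. Dynamic selection by nomination and "voting": most votes wins;
    #    ties break alphabetically (an assumed rule, not from the source).
    if votes:
        tally = {}
        for nominee in votes:
            tally[nominee] = tally.get(nominee, 0) + 1
        return max(sorted(tally), key=lambda n: tally[n])
    # 3. Timeout fallback: deterministically pick one node from the group.
    return min(nodes)

cluster = ["machine-a", "machine-b", "machine-c"]
elected = select_manager(cluster, votes=["machine-b", "machine-b", "machine-a"])
fallback = select_manager(cluster)
preset = select_manager(cluster, preselected="machine-c")
```

Once a manager is chosen, the remaining entities become its clients, as described above.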
  • The managing cluster entity receives from each client entity its configuration information, including, among other things: the communication port on which to send and receive information as to the functional status of the managed entity; the amount of time that the manager can allow between these status updates; the number of consecutive status updates that may be lost before the manager considers the client "lost"; and the event that the manager must issue when the client is determined to be "lost".
  • This and all the other pertinent information are stored in the cluster manager's database.
  • Each client also maintains a cluster database, which stores information about itself and the cluster manager.
  • The cluster manager then begins normal operation, including maintaining a connection with the clients, monitoring the status of the clients and routing published cluster events to the subscribing applications.
  • The clients begin their normal operation, including sending database information to the manager, responding to status requests, detecting whether the cluster manager is lost, participating in the election of a new cluster manager if this occurs, and publishing messages to be routed by the cluster manager to the subscribing entities.
  • Fig. 5 shows three operating systems that are enabled to manage components.
  • Machine A 70 has been nominated as the manager by the machine entities B and C 72 and 74. Entities B and C are then the client entities.
  • The machines in this configuration are controlling three types of components: electronic circuit boards 76 (also known as cards); drivers 78, which are the interface between the boards and the applications; and applications such as 80.
  • The dashed line shows a critical dependency and the solid line shows a non-critical dependency.
  • The double lines 86 show that Machine B has the capability to take over and control board 82 if machine C 74 fails.
  • A critical dependency exists when a fault in one component can cause another component that depends on it to fail; the dependent component is then said to have a critical dependency on the faulty component.
  • Board 76 has a critical dependency on the operating system (O/S) 70.
  • The double line 86 shows that the Machine B operating system can take over board 82 if the Machine C operating system fails.
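The directed graph of Fig. 5 can be modelled so that failures propagate only along critical edges. The propagation rule follows the definition of a critical dependency above; the node names and edge set are made up for illustration.

```python
# component -> components it critically depends on (directed edges).
critical_deps = {
    "board-76": ["os-a"],
    "driver-78": ["board-76"],
    "app-80": ["driver-78"],
}

def failed_components(fault):
    """Return the set of components lost when `fault` fails.

    Any component with a critical dependency (direct or transitive) on a
    failed component is itself considered failed; non-critical edges are
    simply not in the graph and do not propagate failure.
    """
    failed = {fault}
    changed = True
    while changed:  # iterate until the failure set stops growing
        changed = False
        for comp, deps in critical_deps.items():
            if comp not in failed and any(d in failed for d in deps):
                failed.add(comp)
                changed = True
    return failed

impact = failed_components("os-a")
```

A configuration manager could use such an impact set to decide which spares or redundant machines must take over.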
  • Fig. 6 shows how the component management database may be configured to generate an event in case of a component fault.
  • The fields shown are: the IP address of the host 92; the name 94 of the host; and the cluster listen port 96, defined as the network port on which the component management system sends and receives broadcast messages. This port is the same for all the component management systems in the cluster.
  • The next field is the heartbeat period 98, expressed in milliseconds; its inverse is how often the heartbeat pulse should be generated per second. Heartbeats are periodic signals sent from one component to another to show that the sending unit is still functioning correctly.
  • Then comes the heartbeat port 100, which is the network port on which the component management database receives heartbeats from the cluster manager.
  • The next field is the heartbeat retries 110, which is the number of consecutive heartbeats sent to the component management system that must be lost before the cluster manager considers the client component management system to be lost.
  • The last field 120 tells the system what event to publish when the number of heartbeat retries has elapsed.
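Putting the Fig. 6 fields together, the loss-detection behaviour can be sketched as a toy monitor that counts consecutive missed heartbeats and publishes the configured event once the retry budget is exhausted. The field values and the event name are illustrative.

```python
class HeartbeatMonitor:
    """Toy monitor for one client, using the Fig. 6 fields."""
    def __init__(self, retries, lost_event):
        self.retries = retries        # heartbeat retries (field 110)
        self.lost_event = lost_event  # event to publish on loss (field 120)
        self.missed = 0               # consecutive heartbeats missed
        self.published = []           # events published so far

    def tick(self, heartbeat_received):
        # Called once per heartbeat period (field 98).
        if heartbeat_received:
            self.missed = 0  # any heartbeat resets the count
        else:
            self.missed += 1
            if self.missed == self.retries:
                self.published.append(self.lost_event)

mon = HeartbeatMonitor(retries=3, lost_event="client.lost")
# One heartbeat, two misses, a recovery, then three consecutive misses.
for beat in (True, False, False, True, False, False, False):
    mon.tick(beat)
```

Only the final run of three consecutive misses reaches the retry limit, so exactly one "client.lost" event is published.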

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer And Data Communications (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A system and method for monitoring and controlling the various hardware and software elements of a computer system and of a linked set of computer systems (70, 72, 74). The components are provided with methods allowing them to communicate with a component management database that is in turn used by a configuration manager (40). The components can describe their parameters, their relationships with the other components, and their performance measurements. With this information the configuration manager can monitor and control the components to maximize the availability of a system or network.
PCT/US2001/010726 2000-04-04 2001-04-02 Constructing a component management database for managing roles using directed graphs WO2001075677A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP01924615A EP1287445A4 (fr) 2000-04-04 2001-04-02 Constructing a component management database for managing roles using directed graphs
US10/221,514 US20040024732A1 (en) 2001-04-02 2001-04-02 Constructing a component management database for managing roles using a directed graph
JP2001573286A JP2003529847A (ja) 2000-04-04 2001-04-02 Construction of a component management database for role management using directed graphs

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US19437500P 2000-04-04 2000-04-04
US60/194,375 2000-04-04

Publications (1)

Publication Number Publication Date
WO2001075677A1 true WO2001075677A1 (fr) 2001-10-11

Family

ID=22717351

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/010726 WO2001075677A1 (fr) 2000-04-04 2001-04-02 Constructing a component management database for managing roles using directed graphs

Country Status (3)

Country Link
EP (1) EP1287445A4 (fr)
JP (1) JP2003529847A (fr)
WO (1) WO2001075677A1 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003039071A1 (fr) * 2001-10-29 2003-05-08 Sun Microsystems, Inc. Method to manage high availability equipments
WO2003048994A2 (fr) * 2001-11-30 2003-06-12 Oracle International Corporation System and method for actively managing an enterprise of configurable components
US7334222B2 (en) 2002-09-11 2008-02-19 International Business Machines Corporation Methods and apparatus for dependency-based impact simulation and vulnerability analysis
US7434041B2 (en) 2005-08-22 2008-10-07 Oracle International Corporation Infrastructure for verifying configuration and health of a multi-node computer system
US8615578B2 (en) 2005-10-07 2013-12-24 Oracle International Corporation Using a standby data storage system to detect the health of a cluster of data storage servers
US8640096B2 (en) 2008-08-22 2014-01-28 International Business Machines Corporation Configuration of componentized software applications

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4848392B2 (ja) * 2007-05-29 2011-12-28 Hewlett-Packard Development Company, L.P. Method and system for determining the criticality of a hot-plug device in a computer configuration
JP4740979B2 (ja) * 2007-05-29 2011-08-03 Hewlett-Packard Development Company, L.P. Method and system for determining device criticality during SAN reconfiguration

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5659735A (en) * 1994-12-09 1997-08-19 Object Technology Licensing Corp. Object-oriented system for program version and history database management system for various program components
US5819030A (en) * 1996-07-03 1998-10-06 Microsoft Corporation System and method for configuring a server computer for optimal performance for a particular server type
EP0827607B1 (fr) * 1995-06-14 1999-03-24 Novell, Inc. Method for managing globally distributed software components
GB2336224A (en) * 1998-04-07 1999-10-13 Northern Telecom Ltd Hardware register access and database
US5974257A (en) * 1997-07-10 1999-10-26 National Instruments Corporation Data acquisition system with collection of hardware information for identifying hardware constraints during program development
US6058445A (en) * 1997-05-13 2000-05-02 Micron Electronics, Inc. Data management method for adding or exchanging components on a running computer
US6170065B1 (en) * 1997-11-14 2001-01-02 E-Parcel, Llc Automatic system for dynamic diagnosis and repair of computer configurations

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5261044A (en) * 1990-09-17 1993-11-09 Cabletron Systems, Inc. Network management system using multifunction icons for information display
US5278977A (en) * 1991-03-19 1994-01-11 Bull Hn Information Systems Inc. Intelligent node resident failure test and response in a multi-node system
EP0637153B1 (fr) * 1993-07-30 2001-10-31 International Business Machines Corporation Method and apparatus for automatically decomposing a network topology into a principal topology and sub-areas
JPH07182188A (ja) * 1993-12-24 1995-07-21 Toshiba Corp Computer system
JPH07321799A (ja) * 1994-05-23 1995-12-08 Hitachi Ltd Input/output device management method
US5774667A (en) * 1996-03-27 1998-06-30 Bay Networks, Inc. Method and apparatus for managing parameter settings for multiple network devices
US5832196A (en) * 1996-06-28 1998-11-03 Mci Communications Corporation Dynamic restoration process for a telecommunications network

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5659735A (en) * 1994-12-09 1997-08-19 Object Technology Licensing Corp. Object-oriented system for program version and history database management system for various program components
EP0827607B1 (fr) * 1995-06-14 1999-03-24 Novell, Inc. Method for managing globally distributed software components
US5819030A (en) * 1996-07-03 1998-10-06 Microsoft Corporation System and method for configuring a server computer for optimal performance for a particular server type
US6058445A (en) * 1997-05-13 2000-05-02 Micron Electronics, Inc. Data management method for adding or exchanging components on a running computer
US5974257A (en) * 1997-07-10 1999-10-26 National Instruments Corporation Data acquisition system with collection of hardware information for identifying hardware constraints during program development
US6170065B1 (en) * 1997-11-14 2001-01-02 E-Parcel, Llc Automatic system for dynamic diagnosis and repair of computer configurations
GB2336224A (en) * 1998-04-07 1999-10-13 Northern Telecom Ltd Hardware register access and database

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP1287445A4 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003039071A1 (fr) * 2001-10-29 2003-05-08 Sun Microsystems, Inc. Method to manage high availability equipments
US7975016B2 (en) 2001-10-29 2011-07-05 Oracle America, Inc. Method to manage high availability equipments
JP2005512196A (ja) * 2001-11-30 2005-04-28 Oracle International Corporation System and method for actively managing an enterprise of configurable components
WO2003048994A3 (fr) * 2001-11-30 2004-05-06 Oracle Int Corp System and method for actively managing an enterprise of configurable components
CN100367211C (zh) * 2001-11-30 2008-02-06 Oracle International Corporation System and method for effectively managing configurable components of an enterprise
US7418484B2 (en) 2001-11-30 2008-08-26 Oracle International Corporation System and method for actively managing an enterprise of configurable components
AU2002365806B2 (en) * 2001-11-30 2009-05-07 Oracle International Corporation System and method for actively managing an enterprise of configurable components
WO2003048994A2 (fr) * 2001-11-30 2003-06-12 Oracle International Corporation System and method for actively managing an enterprise of configurable components
US7334222B2 (en) 2002-09-11 2008-02-19 International Business Machines Corporation Methods and apparatus for dependency-based impact simulation and vulnerability analysis
US7434041B2 (en) 2005-08-22 2008-10-07 Oracle International Corporation Infrastructure for verifying configuration and health of a multi-node computer system
US8615578B2 (en) 2005-10-07 2013-12-24 Oracle International Corporation Using a standby data storage system to detect the health of a cluster of data storage servers
US8640096B2 (en) 2008-08-22 2014-01-28 International Business Machines Corporation Configuration of componentized software applications
US9092230B2 (en) 2008-08-22 2015-07-28 International Business Machines Corporation Configuration of componentized software applications

Also Published As

Publication number Publication date
EP1287445A1 (fr) 2003-03-05
JP2003529847A (ja) 2003-10-07
EP1287445A4 (fr) 2003-08-13

Similar Documents

Publication Publication Date Title
US7076691B1 (en) Robust indication processing failure mode handling
US7451359B1 (en) Heartbeat mechanism for cluster systems
US6854069B2 (en) Method and system for achieving high availability in a networked computer system
US7370223B2 (en) System and method for managing clusters containing multiple nodes
US6892316B2 (en) Switchable resource management in clustered computer system
EP1320217B1 (fr) Procédé d'installation d'agents de surveillance, système et logiciel pour surveiller des objects dans un réseau de technologie de l'information
AU2004264635B2 (en) Fast application notification in a clustered computing system
US7984453B2 (en) Event notifications relating to system failures in scalable systems
US7194652B2 (en) High availability synchronization architecture
US20030005350A1 (en) Failover management system
US7146532B2 (en) Persistent session and data in transparently distributed objects
US7093013B1 (en) High availability system for network elements
US20030158933A1 (en) Failover clustering based on input/output processors
US20030097610A1 (en) Functional fail-over apparatus and method of operation thereof
KR100423192B1 (ko) 컴퓨터에서 애플리케이션 서버의 가용성을 표시 및 결정하는 방법 및 시스템과 기록 매체
WO2001075677A1 (fr) Constructing a component management database for managing roles using directed graphs
US20040024732A1 (en) Constructing a component management database for managing roles using a directed graph
CA2504170C (fr) Cluster system and interconnection method
US8036105B2 (en) Monitoring a problem condition in a communications system
US7769844B2 (en) Peer protocol status query in clustered computer system
White et al. Design of an Autonomic Element for Server Management
WO2006029714A2 (fr) Method and computer arrangement for controlling a plurality of servers

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
ENP Entry into the national phase

Ref country code: JP

Ref document number: 2001 573286

Kind code of ref document: A

Format of ref document f/p: F

WWE Wipo information: entry into national phase

Ref document number: 2001924615

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2001924615

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 10221514

Country of ref document: US

WWW Wipo information: withdrawn in national office

Ref document number: 2001924615

Country of ref document: EP