US20050192971A1 - System and method for restricting data transfers and managing software components of distributed computers - Google Patents

System and method for restricting data transfers and managing software components of distributed computers Download PDF

Info

Publication number
US20050192971A1
US20050192971A1
Authority
US
United States
Prior art keywords
computer
node
bmonitor
recited
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/112,412
Inventor
Bassam Tabbara
Galen Hunt
Aamer Hydrie
Steven Levi
David Stutz
Robert Welland
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp filed Critical Microsoft Corp
Priority to US11/112,412 priority Critical patent/US20050192971A1/en
Publication of US20050192971A1 publication Critical patent/US20050192971A1/en
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC reassignment MICROSOFT TECHNOLOGY LICENSING, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MICROSOFT CORPORATION
Abandoned legal-status Critical Current

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/34Network arrangements or protocols for supporting network services or applications involving the movement of software or configuration parameters 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/30Definitions, standards or architectural aspects of layered protocol stacks
    • H04L69/32Architecture of open systems interconnection [OSI] 7-layer type protocol stacks, e.g. the interfaces between the data link level and the physical level
    • H04L69/322Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions
    • H04L69/329Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the application layer [OSI layer 7]

Definitions

  • This invention relates to computer system management. More particularly, the invention relates to restricting data transfers and managing software components of distributed computers.
  • One significant way in which the Internet is used is the World Wide Web (also referred to as the “web”), which is a collection of documents (referred to as “web pages”) that users can view or otherwise render and which typically include links to one or more other pages that the user can access.
  • Web pages are typically made available on the web via one or more web servers, a process referred to as “hosting” the web pages. Sometimes these web pages are freely available to anyone that requests to view them (e.g., a company's advertisements) and other times access to the web pages is restricted (e.g., a password may be necessary to access the web pages). Given the large number of people that may be requesting to view the web pages (especially in light of the global accessibility to the web), a large number of servers may be necessary to adequately host the web pages (e.g., the same web page can be hosted on multiple servers to increase the number of people that can access the web page concurrently).
  • a co-location facility refers to a complex that can house multiple servers.
  • the co-location facility typically provides a reliable Internet connection, a reliable power supply, and proper operating environment.
  • the co-location facility also typically includes multiple secure areas (e.g., cages) into which different companies can situate their servers.
  • the collection of servers that a particular company situates at the co-location facility is referred to as a “server cluster”, even though in fact there may only be a single server at any individual co-location facility. The particular company is then responsible for managing the operation of the servers in their server cluster.
  • co-location facilities also present problems.
  • One problem is data security. Different companies (even competitors) can have server clusters at the same co-location facility. Care is required, in such circumstances, to ensure that data received from the Internet (or sent by a server in the server cluster) that is intended for one company is not routed to a server of another company situated at the co-location facility.
  • An additional problem is the management of the servers once they are placed in the co-location facility.
  • a system administrator from a company is able to contact a co-location facility administrator (typically by telephone) and ask him or her to reset a particular server (typically by pressing a hardware reset button on the server, or powering off then powering on the server) in the event of a failure of (or other problem with) the server.
  • This limited reset-only ability provides very little management functionality to the company.
  • the system administrator from the company can physically travel to the co-location facility him/her-self and attend to the faulty server.
  • a significant amount of time can be wasted by the system administrator in traveling to the co-location facility to attend to a server.
  • Beyond server computers, users employ a wide variety of computing devices, such as personal computers (PCs), personal digital assistants (PDAs), pocket computers, palm-sized computers, handheld computers, and digital cellular phones.
  • management of the software on these user computers can be very laborious and time consuming and is particularly difficult for the often non-technical users of these machines.
  • a system administrator or technician must either travel to the remote location of the user's computer, or walk through management operations over a telephone. It would be further beneficial to have an improved way to manage remote computers at the user's location without user intervention.
  • a controller (referred to as the “BMonitor”) is situated on a computer (e.g., each node in a co-location facility).
  • the BMonitor includes a plurality of filters that identify where data can be sent to and/or received from, such as another node in the co-location facility or a client computer coupled to the computer via the Internet. These filters can then be modified, during operation of the computer, by one or more management devices coupled to the computer.
  • a controller referred to as the “BMonitor” (situated on a computer) manages software components executing on that computer. Requests are received by the BMonitor from external sources and implemented by the BMonitor. Such requests can originate from a management console local to the computer or alternatively remote from the computer.
  • a controller referred to as the “BMonitor” (situated on a computer) operates as a trusted third party mediating interaction among multiple management devices.
  • the BMonitor maintains multiple ownership domains, each corresponding to a management device(s) and each having a particular set of rights that identify what types of management functions they can command the BMonitor to carry out. Only one ownership domain is the top-level domain at any particular time, and the top-level domain has a more expanded set of rights than any of the lower-level domains.
  • the top-level domain can create new ownership domains corresponding to other management devices; it can also be removed, and the management rights of its corresponding management device revoked, at any time by a management device corresponding to a lower-level ownership domain.
  • the computer's system memory can be erased so that no confidential information from one ownership domain is made available to devices corresponding to other ownership domains.
  • the BMonitor is implemented in a more-privileged level than other software engines executing on the node, preventing other software engines from interfering with restrictions imposed by the BMonitor.
  • FIG. 1 shows a client/server network system and environment such as may be used with certain embodiments of the invention.
  • FIG. 2 shows a general example of a computer that can be used in accordance with certain embodiments of the invention.
  • FIG. 3 is a block diagram illustrating an exemplary co-location facility in more detail.
  • FIG. 4 is a block diagram illustrating an exemplary multi-tiered server cluster management architecture.
  • FIG. 5 is a block diagram illustrating an exemplary node of a co-location facility in more detail in accordance with certain embodiments of the invention.
  • FIG. 6 is a block diagram illustrating an exemplary set of ownership domains in accordance with certain embodiments of the invention.
  • FIG. 7 is a flow diagram illustrating the general operation of a BMonitor in accordance with certain embodiments of the invention.
  • FIG. 8 is a flowchart illustrating an exemplary process for handling outbound data requests in accordance with certain embodiments of the invention.
  • FIG. 9 is a flowchart illustrating an exemplary process for handling inbound data requests in accordance with certain embodiments of the invention.
  • FIG. 1 shows a client/server network system and environment such as may be used with certain embodiments of the invention.
  • the system includes one or more (n) client computers 102 , one or more (m) co-location facilities 104 each including multiple clusters of server computers (server clusters) 106 , one or more management devices 110 , and one or more separate (e.g., not included in a co-location facility) servers 112 .
  • the servers, clients, and management devices communicate with each other over a data communications network 108 .
  • the communications network in FIG. 1 comprises a public network 108 such as the Internet. Other types of communications networks might also be used, in addition to or in place of the Internet, including local area networks (LANs), wide area networks (WANs), etc.
  • Data communications network 108 can be implemented in any of a variety of different manners, including wired and/or wireless communications media.
  • Communication over network 108 can be carried out using any of a wide variety of communications protocols.
  • client computers 102 and server computers in clusters 106 can communicate with one another using the Hypertext Transfer Protocol (HTTP), in which web pages are hosted by the server computers and written in a markup language, such as the Hypertext Markup Language (HTML) or the eXtensible Markup Language (XML).
  • Management device 110 operates to manage software components of one or more computing devices located at a location remote from device 110 . This management may also include restricting data transfers into and/or out of the computing device being managed. In the illustrated example of FIG. 1 , management device 110 can remotely manage any one or more of: a client(s) 102 , a server cluster(s) 106 , or a server(s) 112 . Any of a wide variety of computing devices can be remotely managed, including personal computers (PCs), network PCs, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, gaming consoles, Internet appliances, personal digital assistants (PDAs), pocket computers, palm-sized computers, handheld computers, digital cellular phones, etc. Remote management of a computing device is accomplished by communicating commands to the device via network 108 , as discussed in more detail below.
  • program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • program modules may be located in both local and remote memory storage devices.
  • embodiments of the invention can be implemented in hardware or a combination of hardware, software, and/or firmware.
  • all or part of the invention can be implemented in one or more application specific integrated circuits (ASICs) or programmable logic devices (PLDs).
  • FIG. 2 shows a general example of a computer 142 that can be used in accordance with certain embodiments of the invention.
  • Computer 142 is shown as an example of a computer that can perform the functions of a client computer 102 of FIG. 1 , a server computer or node in a co-location facility 104 of FIG. 1 , a management device 110 of FIG. 1 , a server 112 of FIG. 1 , or a local or remote management console as discussed in more detail below.
  • Computer 142 includes one or more processors or processing units 144 , a system memory 146 , and a bus 148 that couples various system components including the system memory 146 to processors 144 .
  • the bus 148 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
  • the system memory includes read only memory (ROM) 150 and random access memory (RAM) 152 .
  • a basic input/output system (BIOS) 154 containing the basic routines that help to transfer information between elements within computer 142 , such as during start-up, is stored in ROM 150 .
  • Computer 142 further includes a hard disk drive 156 for reading from and writing to a hard disk, not shown, connected to bus 148 via a hard disk drive interface 157 (e.g., a SCSI, ATA, or other type of interface); a magnetic disk drive 158 for reading from and writing to a removable magnetic disk 160 , connected to bus 148 via a magnetic disk drive interface 161 ; and an optical disk drive 162 for reading from or writing to a removable optical disk 164 such as a CD ROM, DVD, or other optical media, connected to bus 148 via an optical drive interface 165 .
  • the drives and their associated computer-readable media provide nonvolatile storage of computer readable instructions, data structures, program modules and other data for computer 142 .
  • Although the exemplary environment described herein employs a hard disk, a removable magnetic disk 160 , and a removable optical disk 164 , it should be appreciated by those skilled in the art that other types of computer readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, random access memories (RAMs), read only memories (ROMs), and the like, may also be used in the exemplary operating environment.
  • a number of program modules may be stored on the hard disk, magnetic disk 160 , optical disk 164 , ROM 150 , or RAM 152 , including an operating system 170 , one or more application programs 172 , other program modules 174 , and program data 176 .
  • a user may enter commands and information into computer 142 through input devices such as keyboard 178 and pointing device 180 .
  • Other input devices may include a microphone, joystick, game pad, satellite dish, scanner, or the like.
  • These and other input devices are connected to the processing unit 144 through an interface 168 that is coupled to the system bus.
  • a monitor 184 or other type of display device is also connected to the system bus 148 via an interface, such as a video adapter 186 .
  • personal computers typically include other peripheral output devices (not shown) such as speakers and printers.
  • Computer 142 optionally operates in a networked environment using logical connections to one or more remote computers, such as a remote computer 188 .
  • the remote computer 188 may be another personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer 142 , although only a memory storage device 190 has been illustrated in FIG. 2 .
  • the logical connections depicted in FIG. 2 include a local area network (LAN) 192 and a wide area network (WAN) 194 .
  • Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet.
  • remote computer 188 executes an Internet Web browser program (which may optionally be integrated into the operating system 170 ) such as the “Internet Explorer” Web browser manufactured and distributed by Microsoft Corporation of Redmond, Wash.
  • When used in a LAN networking environment, computer 142 is connected to the local network 192 through a network interface or adapter 196 . When used in a WAN networking environment, computer 142 typically includes a modem 198 or other component for establishing communications over the wide area network 194 , such as the Internet.
  • the modem 198 which may be internal or external, is connected to the system bus 148 via an interface (e.g., a serial port interface 168 ).
  • program modules depicted relative to the personal computer 142 may be stored in the remote memory storage device. It is to be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
  • the data processors of computer 142 are programmed by means of instructions stored at different times in the various computer-readable storage media of the computer.
  • Programs and operating systems are typically distributed, for example, on floppy disks or CD-ROMs. From there, they are installed or loaded into the secondary memory of a computer. At execution, they are loaded at least partially into the computer's primary electronic memory.
  • the invention described herein includes these and other various types of computer-readable storage media when such media contain instructions or programs for implementing the steps described below in conjunction with a microprocessor or other data processor.
  • the invention also includes the computer itself when programmed according to the methods and techniques described below.
  • certain sub-components of the computer may be programmed to perform the functions and steps described below. The invention includes such sub-components when they are programmed as described.
  • the invention described herein includes data structures, described below, as embodied on various types of memory media.
  • programs and other executable program components such as the operating system are illustrated herein as discrete blocks, although it is recognized that such programs and components reside at various times in different storage components of the computer, and are executed by the data processor(s) of the computer.
  • FIG. 3 is a block diagram illustrating an exemplary co-location facility in more detail.
  • Co-location facility 104 is illustrated including multiple nodes (also referred to as server computers) 210 .
  • Co-location facility 104 can include any number of nodes 210 , and can easily include nodes numbering into the thousands.
  • the nodes 210 are grouped together in clusters, referred to as server clusters (or node clusters). For ease of explanation and to avoid cluttering the drawings, only a single cluster 212 is illustrated in FIG. 3 .
  • Each server cluster includes nodes 210 that correspond to a particular customer of co-location facility 104 .
  • the nodes 210 of a server cluster can be physically isolated from the nodes 210 of other server clusters. This physical isolation can take different forms, such as separate locked cages or separate rooms at co-location facility 104 . Physically isolating server clusters ensures customers of co-location facility 104 that only they can physically access their nodes (other customers cannot).
  • a landlord/tenant relationship (also referred to as a lessor/lessee relationship) can also be established based on the nodes 210 .
  • the owner (and/or operator) of co-location facility 104 owns (or otherwise has rights to) the individual nodes 210 , and thus can be viewed as a “landlord”.
  • the customers of co-location facility 104 lease the nodes 210 from the landlord, and thus can be viewed as a “tenant”.
  • the landlord is typically not concerned with what types of data or programs are being stored at the nodes 210 by the tenant, but does impose boundaries on the clusters that prevent nodes 210 from different clusters from communicating with one another, as discussed in more detail below. Additionally, the nodes 210 provide assurances to the tenant that, although the nodes are only leased to the tenant, the landlord cannot access confidential information stored by the tenant.
  • nodes 210 of different clusters are often physically coupled to the same transport medium (or media) 211 that enables access to network connection(s) 216 , and possibly application operations management console 242 , discussed in more detail below.
  • This transport medium can be wired or wireless.
  • Although each node 210 can be coupled to a shared transport medium 211 , each node 210 is configurable to restrict which other nodes 210 data can be sent to or received from.
  • Given a customer's (also referred to as a tenant's) server cluster, the customer may want to be able to pass data between different nodes 210 within the cluster for processing, storage, etc.
  • the customer will typically not want data to be passed to other nodes 210 that are not in the server cluster.
  • Configuring each node 210 in the cluster to restrict which other nodes 210 data can be sent to or received from allows a boundary for the server cluster to be established and enforced. Establishment and enforcement of such server cluster boundaries prevents customer data from being erroneously or improperly forwarded to a node that is not part of the cluster.
  • Such boundaries also prevent communication between nodes 210 of different customers, thereby ensuring that each customer's data is passed only to other nodes 210 of that customer.
  • the customer itself may also further define sub-boundaries within its cluster, establishing sub-clusters of nodes 210 into or out of which data cannot be communicated, even to or from other nodes in the cluster.
  • the customer is able to add, modify, remove, etc. such sub-cluster boundaries at will, but only within the boundaries defined by the landlord (that is, the cluster boundaries). Thus, the customer is not able to alter boundaries in a manner that would allow communication to or from a node 210 to extend to another node 210 that is not within the same cluster.
  • Co-location facility 104 supplies reliable power 214 and reliable network connection(s) 216 (e.g., to network 108 of FIG. 1 ) to each of the nodes 210 .
  • Power 214 and network connection(s) 216 are shared by all of the nodes 210 , although alternatively separate power 214 and network connection(s) 216 may be supplied to nodes 210 or groupings (e.g., clusters) of nodes.
  • Any of a wide variety of conventional mechanisms for supplying reliable power can be used to supply reliable power 214 , such as power received from a public utility company along with backup generators in the event of power failures, redundant generators, batteries, fuel cells, or other power storage mechanisms, etc.
  • any of a wide variety of conventional mechanisms for supplying a reliable network connection can be used to supply network connection(s) 216 , such as redundant connection transport media, different types of connection media, different access points (e.g., different Internet access points, different Internet service providers (ISPs), etc.).
  • nodes 210 are leased or sold to customers by the operator or owner of co-location facility 104 along with the space (e.g., locked cages) and service (e.g., access to reliable power 214 and network connection(s) 216 ) at facility 104 .
  • space and service at facility 104 may be leased to customers while one or more nodes are supplied by the customer.
  • FIG. 4 is a block diagram illustrating an exemplary multi-tiered management architecture.
  • the multi-tiered architecture includes three tiers: a cluster operations management tier 230 , an application operations management tier 232 , and an application development tier 234 .
  • Cluster operations management tier 230 is implemented locally at the same location as the server(s) being managed (e.g., at a co-location facility) and involves managing the hardware operations of the server(s).
  • cluster operations management tier 230 is not concerned with what software components are executing on the nodes 210 , but only with the continuing operation of the hardware of nodes 210 and establishing any boundaries between clusters of nodes.
  • the application operations management tier 232 is implemented at a remote location other than where the server(s) being managed are located (e.g., other than the co-location facility), but from a client computer that is still communicatively coupled to the server(s).
  • the application operations management tier 232 involves managing the software operations of the server(s) and defining any sub-boundaries within server clusters.
  • the client can be coupled to the server(s) in any of a variety of manners, such as via the Internet or via a dedicated (e.g., dial-up) connection.
  • the client can be coupled continually to the server(s), or alternatively sporadically (e.g., only when needed for management purposes).
  • the application development tier 234 is implemented on another client computer at a location other than the server(s) (e.g., other than at the co-location facility) and involves development of software components or engines for execution on the server(s).
  • current software on a node 210 at co-location facility 104 could be accessed by a remote client to develop additional software components or engines for the node.
  • the client at which application development tier 234 is implemented is typically a different client than that at which application operations management tier 232 is implemented, tiers 232 and 234 could be implemented (at least in part) on the same client.
  • the multi-tiered architecture could include different numbers of tiers.
  • the application operations management tier may be separated into two tiers, each having different (or overlapping) responsibilities, resulting in a 4-tiered architecture.
  • the management at these tiers may occur from the same place (e.g., a single application operations management console may be shared), or alternatively from different places (e.g., two different operations management consoles).
  • co-location facility 104 includes a cluster operations management console for each server cluster.
  • cluster operations management console 240 corresponds to cluster 212 and may be, for example, a management device 110 of FIG. 1 .
  • Cluster operations management console 240 implements cluster operations management tier 230 ( FIG. 4 ) for cluster 212 and is responsible for managing the hardware operations of nodes 210 in cluster 212 .
  • Cluster operations management console 240 monitors the hardware in cluster 212 and attempts to identify hardware failures. Any of a wide variety of hardware failures can be monitored for, such as processor failures, bus failures, memory failures, etc.
  • Hardware operations can be monitored in any of a variety of manners, such as cluster operations management console 240 sending test messages or control signals to the nodes 210 that require the use of particular hardware in order to respond (no response or an incorrect response indicates failure), or having nodes 210 periodically send to cluster operations management console 240 messages or control signals whose generation requires the use of particular hardware (not receiving such a message or control signal within a specified amount of time indicates failure), etc.
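  • As a rough sketch of the timeout-based variant just described (in Python, with an invented reporting interval and node-address bookkeeping that the patent does not specify), a monitor could flag any node that has not been heard from recently:

        import time

        # Hypothetical heartbeat monitor: each node is expected to send a message
        # or control signal (whose generation requires working hardware) at least
        # once per REPORT_INTERVAL; longer silence is treated as a failure.
        REPORT_INTERVAL = 30.0  # seconds; illustrative value only

        last_heard = {}  # node address -> timestamp of the last message received


        def record_heartbeat(node_address: str) -> None:
            """Called whenever a message or control signal arrives from a node."""
            last_heard[node_address] = time.time()


        def find_failed_nodes(known_nodes: list) -> list:
            """Return the nodes that have not reported within the allowed interval."""
            now = time.time()
            return [n for n in known_nodes
                    if now - last_heard.get(n, 0.0) > REPORT_INTERVAL]
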
  • cluster operations management console 240 may make no attempt to identify what type of hardware failure has occurred, but rather simply that a failure has occurred.
  • cluster operations management console 240 acts to correct the failure.
  • the action taken by cluster operations management console 240 can vary based on the hardware as well as the type of failure, and can vary for different server clusters.
  • the corrective action can be notification of an administrator (e.g., a flashing light, an audio alarm, an electronic mail message, calling a cell phone or pager, etc.), or an attempt to physically correct the problem (e.g., reboot the node, activate another backup node to take its place, etc.).
  • Cluster operations management console 240 also establishes cluster boundaries within co-location facility 104 .
  • the cluster boundaries established by console 240 prevent nodes 210 in one cluster (e.g., cluster 212 ) from communicating with nodes in another cluster (e.g., any node not in cluster 212 ), while at the same time not interfering with the ability of nodes 210 within a cluster from communicating with other nodes within that cluster.
  • These boundaries provide security for the tenants' data, allowing them to know that their data cannot be communicated to other tenants' nodes 210 at facility 104 even though network connection 216 may be shared by the tenants.
  • each cluster of co-location facility 104 includes a dedicated cluster operations management console.
  • a single cluster operations management console may correspond to, and manage hardware operations of, multiple server clusters.
  • multiple cluster operations management consoles may correspond to, and manage hardware operations of, a single server cluster.
  • Such multiple consoles can manage a single server cluster in a shared manner, or one console may operate as a backup for another console (e.g., providing increased reliability through redundancy, to allow for maintenance, etc.).
  • An application operations management console 242 is also communicatively coupled to co-location facility 104 .
  • Application operations management console 242 may be, for example, a management device 110 of FIG. 1 .
  • Application operations management console 242 is located at a location remote from co-location facility 104 (that is, not within co-location facility 104 ), typically being located at the offices of the customer.
  • a different application operations management console 242 corresponds to each server cluster of co-location facility 104 , although alternatively multiple consoles 242 may correspond to a single server cluster, or a single console 242 may correspond to multiple server clusters.
  • Application operations management console 242 implements application operations management tier 232 ( FIG. 4 ) for cluster 212 and is responsible for managing the software operations of nodes 210 in cluster 212 as well as securing sub-boundaries within cluster 212 .
  • Application operations management console 242 monitors the software in cluster 212 and attempts to identify software failures. Any of a wide variety of software failures can be monitored for, such as application processes or threads that are “hung” or otherwise non-responsive, an error in execution of application processes or threads, etc.
  • Software operations can be monitored in any of a variety of manners (similar to the monitoring of hardware operations discussed above), such as application operations management console 242 sending test messages or control signals to particular processes or threads executing on the nodes 210 that require the use of particular routines in order to respond (no response or an incorrect response indicates failure), or having processes or threads executing on nodes 210 periodically send to application operations management console 242 messages or control signals whose generation requires the use of particular software routines (not receiving such a message or control signal within a specified amount of time indicates failure), etc.
  • application operations management console 242 may make no attempt to identify what type of software failure has occurred, but rather simply that a failure has occurred.
  • application operations management console 242 acts to correct the failure.
  • the action taken by application operations management console 242 can vary based on the software as well as the type of failure, and can vary for different server clusters.
  • the corrective action can be notification of an administrator (e.g., a flashing light, an audio alarm, an electronic mail message, calling a cell phone or pager, etc.), or an attempt to correct the problem (e.g., reboot the node, re-load the software component or engine image, terminate and re-execute the process, etc.).
  • the management of a node 210 is distributed across multiple managers, regardless of the number of other nodes (if any) situated at the same location as the node 210 .
  • the multi-tiered management allows the hardware operations management to be separated from the application operations management, allowing two different consoles (each under the control of a different entity) to share the management responsibility for the node.
  • FIG. 5 is a block diagram illustrating an exemplary remotely managed node in more detail in accordance with certain embodiments of the invention.
  • Node 248 can be a node 210 of a co-location facility, or alternatively a separate device (e.g., a client 102 or server 112 of FIG. 1 ).
  • Node 248 includes a monitor 250 , referred to as the “BMonitor”, and a plurality of software components or engines 252 , and is coupled to (or alternatively incorporates) a mass storage device 262 .
  • node 248 is a computing device having a processor(s) that supports multiple privilege levels (e.g., rings in an x86 architecture processor).
  • these privilege levels are referred to as rings, although alternate implementations using different processor architectures may use different nomenclature.
  • the multiple rings provide a set of prioritized levels that software can execute at, often including 4 levels (Rings 0 , 1 , 2 , and 3 ).
  • Ring 0 is typically referred to as the most privileged ring.
  • Software processes executing in Ring 0 can typically access more features (e.g., instructions) than processes executing in less privileged Rings.
  • a processor executing in a particular Ring cannot alter code or data in a higher priority ring.
  • BMonitor 250 executes in Ring 0 , while engines 252 execute in Ring 1 (or alternatively Rings 2 and/or 3 ).
  • BMonitor 250 (executing in Ring 0 ) cannot be altered directly by engines 252 (executing in Ring 1 ). Rather, any such alterations would have to be made by an engine 252 requesting BMonitor 250 to make the alteration (e.g., by sending a message to BMonitor 250 , invoking a function of BMonitor 250 , etc.).
  • Executing BMonitor 250 in Ring 0 protects it from a rogue or malicious engine 252 that tries to bypass any restrictions imposed by BMonitor 250 .
  • BMonitor 250 may be implemented in other manners that protect it from a rogue or malicious engine 252 .
  • node 248 may include multiple processors—one (or more) processor(s) for executing engines 252 , and another processor(s) to execute BMonitor 250 .
  • BMonitor 250 can be effectively shielded from engines 252 .
  • BMonitor 250 is the fundamental control module of node 248 —it controls (and optionally includes) both the network interface card and the memory manager. By controlling the network interface card (which may be separate from BMonitor 250 , or alternatively BMonitor 250 may be incorporated on the network interface card), BMonitor 250 can control data received by and sent by node 248 . By controlling the memory manager, BMonitor 250 controls the allocation of memory to engines 252 executing in node 248 and thus can assist in preventing rogue or malicious engines from interfering with the operation of BMonitor 250 .
  • Although such functionality (e.g., the network interface card) is controlled by BMonitor 250 , BMonitor 250 still makes at least part of that functionality available to engines 252 executing on the node 248 .
  • BMonitor 250 provides an interface (e.g., via controller 254 discussed in more detail below) via which engines 252 can request access to the functionality, such as to send data out to another node 248 within a co-location facility or on the Internet. These requests can take any of a variety of forms, such as sending messages, calling a function, etc.
  • BMonitor 250 includes controller 254 , network interface 256 , one or more filters 258 , one or more keys 259 , and a BMonitor Control Protocol (BMCP) module 260 .
  • Network interface 256 provides the interface between node 248 and the network (e.g., network 108 of FIG. 1 ).
  • Filters 258 identify other nodes 248 in a co-location facility, and/or other sources or targets (e.g., coupled to Internet 108 of FIG. 1 ), that data can (or alternatively cannot) be sent to and/or received from.
  • the nodes or other sources/targets can be identified in any of a wide variety of manners, such as by network address (e.g., Internet Protocol (IP) address), some other globally unique identifier, a locally unique identifier (e.g., a numbering scheme proprietary or local to co-location facility 104 ), etc.
  • Filters 258 can fully restrict access to a node (e.g., no data can be received from or sent to the node), or partially restrict access to a node. Partial access restriction can take different forms. For example, a node may be restricted so that data can be received from the node but not sent to the node (or vice versa). By way of another example, a node may be restricted so that only certain types of data (e.g., communications in accordance with certain protocols, such as HTTP) can be received from and/or sent to the node. Filtering based on particular types of data can be implemented in different manners, such as by communicating data in packets with header information that indicate the type of data included in the packet.
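  • For illustration only, a single entry in filters 258 might record the remote identifier, the permitted direction(s), and optionally the permitted protocols; the field names and structure below are assumptions, not taken from the patent:

        from dataclasses import dataclass
        from typing import FrozenSet, Optional


        @dataclass(frozen=True)
        class FilterRule:
            """One entry in filters 258: which remote party may be talked to, and how.

            remote: a network address (e.g., an IP address) or other node identifier.
            inbound/outbound: whether data may be received from / sent to the remote.
            protocols: None means any protocol; otherwise only the listed protocols
            (e.g., {"HTTP"}) are permitted, matching the partial-restriction case.
            """
            remote: str
            inbound: bool = True
            outbound: bool = True
            protocols: Optional[FrozenSet[str]] = None

            def permits(self, remote: str, direction: str, protocol: str) -> bool:
                if remote != self.remote:
                    return False
                if direction == "in" and not self.inbound:
                    return False
                if direction == "out" and not self.outbound:
                    return False
                return self.protocols is None or protocol in self.protocols
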
  • Filters 258 can be added by one or more management devices 110 of FIG. 1 or either of application operations management console 242 or cluster operations management console 240 of FIG. 3 .
  • filters added by cluster operations management console 240 (to establish cluster boundaries) restrict full access to nodes (e.g., any access to another node can be prevented) whereas filters added by application operations management console 242 (to establish sub-boundaries within a cluster) or management device 110 can restrict either full access to nodes or partial access.
  • Controller 254 also imposes some restrictions on what filters can be added to filters 258 .
  • controller 254 allows cluster operations management console 240 to add any filters it desires (which will define the boundaries of the cluster). However, controller 254 restricts application operations management console 242 to adding only filters that are at least as restrictive as those added by console 240 . If console 242 attempts to add a filter that is less restrictive than those added by console 240 (in which case the sub-boundary may extend beyond the cluster boundaries), controller 254 refuses to add the filter (or alternatively may modify the filter so that it is not less restrictive). By imposing such a restriction, controller 254 can ensure that the sub-boundaries established at the application operations management level do not extend beyond the cluster boundaries established at the cluster operations management level.
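  • One way to realize the "at least as restrictive" rule, sketched here under the assumption that each filter can enumerate the (remote, direction, protocol) combinations it permits, is a simple subset test: a tenant filter is accepted only if everything it allows is also allowed by the landlord's filters:

        def at_least_as_restrictive(proposed_allow: set, landlord_allow: set) -> bool:
            """True if the proposed filter permits nothing the landlord forbids.

            Each set holds (remote, direction, protocol) tuples that a filter allows.
            Requiring the proposed set to be a subset of the landlord's set keeps
            sub-boundaries from extending beyond the cluster boundary.
            """
            return proposed_allow <= landlord_allow


        def add_tenant_filter(proposed_allow: set, landlord_allow: set,
                              tenant_filters: list) -> bool:
            if at_least_as_restrictive(proposed_allow, landlord_allow):
                tenant_filters.append(proposed_allow)
                return True
            return False  # refused, as controller 254 refuses overly broad filters
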
  • Controller 254 , using one or more filters 258 , operates to restrict data packets sent from node 248 and/or received by node 248 . All data intended for an engine 252 , or sent by an engine 252 to another node, is passed through network interface 256 and filters 258 . Controller 254 applies the filters 258 to the data, comparing the target of the data (e.g., typically identified in a header portion of a packet including the data) to acceptable (and/or restricted) nodes (and/or network addresses) identified in filters 258 . If filters 258 indicate that the target of the data is acceptable, then controller 254 allows the data to pass through to the target (either into node 248 or out from node 248 ).
  • However, if filters 258 indicate that the target of the data is not acceptable, then controller 254 prevents the data from passing through to the target. Controller 254 may return an indication to the source of the data that the data cannot be passed to the target, or may simply ignore or discard the data.
  • The use of filters 258 allows the boundary restrictions of a server cluster ( FIG. 3 ) to be imposed.
  • Filters 258 can be programmed (e.g., by application operations management console 242 of FIG. 3 ) with the node addresses of all the nodes within the server cluster (e.g., cluster 212 ). Controller 254 then prevents data received from any node not within the server cluster from being passed through to an engine 252 , and similarly prevents any data from being sent to a node other than one within the server cluster. Similarly, data received from Internet 108 ( FIG. 1 ) can be restricted by filters 258 .
  • server cluster boundaries can be easily changed to accommodate changes in the server cluster (e.g., addition of nodes to and/or removal of nodes from the server cluster).
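  • In the simplified model of the FilterRule sketch above, the outbound and inbound handling illustrated in FIGS. 8 and 9 reduces to a single pass-or-discard decision per packet; the function below is illustrative, and the patent leaves the packet format and error signalling open:

        def handle_packet(rules, remote: str, direction: str, protocol: str,
                          payload: bytes):
            """Apply filters to one packet.

            direction is "out" for data an engine is sending and "in" for data
            arriving from the network.  Returns the payload if it may pass, or
            None if it is discarded (an error indication could be returned to
            the source instead).
            """
            if any(rule.permits(remote, direction, protocol) for rule in rules):
                return payload   # acceptable target/source: pass the data through
            return None          # blocked by the cluster boundary or sub-boundary
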
  • BMCP module 260 implements the Dynamic Host Configuration Protocol (DHCP), allowing BMonitor 250 (and thus node 248 ) to obtain an IP address from a DHCP server (e.g., cluster operations management console 240 of FIG. 3 ). During an initialization process for node 248 , BMCP module 260 requests an IP address from the DHCP server, which in turn provides the IP address to module 260 . Additional information regarding DHCP is available from Microsoft Corporation of Redmond, Wash.
  • Software engines 252 include any of a wide variety of conventional software components. Examples of engines 252 include an operating system (e.g., Windows NT®), a load balancing server component (e.g., to balance the processing load of multiple nodes 248 ), a caching server component (e.g., to cache data and/or instructions from another node 248 or received via the Internet), a storage manager component (e.g., to manage storage of data from another node 248 or received via the Internet), etc. In one implementation, each of the engines 252 is a protocol-based engine, communicating with BMonitor 250 and other engines 252 via messages and/or function calls without requiring the engines 252 and BMonitor 250 to be written using the same programming language.
  • Controller 254 , in conjunction with loader 264 , is responsible for controlling the execution of engines 252 . This control can take different forms, including beginning or initiating execution of an engine 252 , terminating execution of an engine 252 , re-loading an image of an engine 252 from a storage device, debugging execution of an engine 252 , etc. Controller 254 receives instructions from application operations management console 242 of FIG. 3 or a management device(s) 110 of FIG. 1 regarding which of these control actions to take and when to take them.
  • To initiate execution of an engine 252 , controller 254 communicates with loader 264 to load an image of the engine 252 from a storage device (e.g., device 262 , ROM, etc.) into the memory (e.g., RAM) of node 248 .
  • Loader 264 operates in a conventional manner to copy the image of the engine from the storage device into memory and initialize any necessary operating system parameters to allow execution of the engine 252 .
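  • The control actions listed above might be dispatched roughly as in the following sketch; the class, its method names, and the loader interface (load_and_execute) are hypothetical and are not defined by the patent:

        class EngineController:
            """Sketch of controller 254 delegating engine control to loader 264."""

            def __init__(self, loader):
                self.loader = loader   # copies engine images into memory (loader 264)
                self.running = {}      # engine name -> handle for the running engine

            def start(self, engine_name: str, image_path: str) -> None:
                # Hypothetical loader call: copy the image into memory and run it.
                handle = self.loader.load_and_execute(image_path)
                self.running[engine_name] = handle

            def stop(self, engine_name: str) -> None:
                self.running.pop(engine_name).terminate()

            def reload(self, engine_name: str, image_path: str) -> None:
                # Re-load the engine image from storage and start it again.
                self.stop(engine_name)
                self.start(engine_name, image_path)
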
  • the control of engines 252 is actually managed by a remote device, not locally at the same location as the node 248 being managed.
  • Controller 254 also provides an interface via which application operations management console 242 of FIG. 3 or a management device(s) 110 of FIG. 1 can identify filters to add (and/or remove) from filter set 258 .
  • Controller 254 also includes an interface via which cluster operations management console 240 of FIG. 3 can communicate commands to controller 254 .
  • Different types of hardware operation oriented commands can be communicated to controller 254 by cluster operations management console 240 , such as re-booting the node, shutting down the node, placing the node in a low-power state (e.g., in a suspend or standby state), changing cluster boundaries, changing encryption keys (if any), etc.
  • Controller 254 further optionally provides encryption support for BMonitor 250 , allowing data to be stored securely on mass storage device 262 (e.g., a magnetic disk, an optical disk, etc.) and secure communications to occur between node 248 and an operations management console (e.g., console 240 or 242 of FIG. 3 ) or other management device (e.g., management device 110 of FIG. 1 ).
  • Controller 254 maintains multiple encryption keys 259 , which can include a variety of different keys such as symmetric keys (secret keys used in secret key cryptography), public/private key pairs (for public key cryptography), etc. to be used in encrypting and/or decrypting data.
  • BMonitor 250 makes use of public key cryptography to provide secure communications between node 248 and the management consoles (e.g., consoles 240 or 242 of FIG. 3 ) or other management devices (e.g., management device(s) 110 of FIG. 1 ).
  • Public key cryptography is based on a key pair, including both a public key and a private key, and an encryption algorithm.
  • the encryption algorithm can encrypt data based on the public key such that it cannot be decrypted efficiently without the private key.
  • communications from the public-key holder can be encrypted using the public key, allowing only the private-key holder to decrypt the communications.
  • Any of a variety of public key cryptography techniques may be used, such as the well-known RSA (Rivest, Shamir, and Adelman) encryption technique.
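  • For concreteness, the key-pair generation and public-key encryption described above can be illustrated with a standard RSA implementation; the sketch below uses the Python cryptography package and a 2048-bit key purely as an example, neither of which is mandated by the patent:

        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import padding, rsa

        # Generate a public/private key pair (e.g., (U_T, R_T) for the tenant).
        private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        public_key = private_key.public_key()

        oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)

        # A management device holding only the public key can encrypt a command...
        ciphertext = public_key.encrypt(b"hypothetical management command", oaep)

        # ...and only the private-key holder (the BMonitor) can decrypt it.
        plaintext = private_key.decrypt(ciphertext, oaep)
        assert plaintext == b"hypothetical management command"
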
  • BMonitor 250 is initialized to include a public/private key pair for both the landlord and the tenant. These key pairs can be generated by BMonitor 250 , or alternatively by some other component and stored within BMonitor 250 (with that other component being trusted to destroy its knowledge of the key pair).
  • In the discussion herein, U refers to a public key and R refers to a private key; the public/private key pair for the landlord is referred to as (U_L, R_L), and the public/private key pair for the tenant is referred to as (U_T, R_T).
  • BMonitor 250 makes the public keys U_L and U_T available to the landlord, but keeps the private keys R_L and R_T secret.
  • BMonitor 250 never divulges the private keys R_L and R_T, so both the landlord and the tenant can be assured that no entity other than the BMonitor 250 can decrypt information that they encrypt using their public keys (e.g., via cluster operations management console 240 and application operations management console 242 of FIG. 3 , respectively).
  • the landlord can assign node 248 to a particular tenant, giving that tenant the public key U_T.
  • Use of the public key U_T allows the tenant to encrypt communications to BMonitor 250 that only BMonitor 250 can decrypt (using the private key R_T).
  • a prudent initial step for the tenant is to request that BMonitor 250 generate a new public/private key pair (U_T, R_T).
  • controller 254 or a dedicated key generator (not shown) of BMonitor 250 generates a new public/private key pair in any of a variety of well-known manners, stores the new key pair as the tenant key pair, and returns the new public key U_T to the tenant.
  • the tenant is assured that no other entity, including the landlord, is aware of the tenant public key U_T. Additionally, the tenant may also have new key pairs generated at subsequent times.
  • BMonitor 250 also maintains, as one of keys 259 , a disk key which is generated based on one or more symmetric keys (symmetric keys refer to secret keys used in secret key cryptography).
  • The disk key, also a symmetric key, is used by BMonitor 250 to store information in mass storage device 262 .
  • BMonitor 250 keeps the disk key secure, using it only to encrypt data to be stored on mass storage device 262 and decrypt data retrieved from mass storage device 262 (thus there is no need for any other entities, including any management device, to have knowledge of the disk key).
  • BMonitor 250 uses the disk key to encrypt data to be stored on mass storage device 262 regardless of the source of the data.
  • the data may come from a client device (e.g., client 102 of FIG. 1 ) used by a customer of the tenant, from a management device (e.g., a device 110 of FIG. 1 or a console 240 or 242 of FIG. 3 ), etc.
  • the disk key is generated by combining the storage keys corresponding to each management device.
  • the storage keys can be combined in a variety of different manners, and in one implementation are combined by using one of the keys to encrypt the other key, with the resultant value being encrypted by another one of the keys, etc.
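  • A minimal sketch of that chained combination, assuming (purely for illustration) Fernet symmetric keys and a SHA-256 reduction to fixed-length key material, neither of which the patent specifies:

        import base64
        import hashlib

        from cryptography.fernet import Fernet


        def derive_disk_key(storage_keys):
            """Combine per-ownership-domain storage keys into a single disk key.

            Follows the chained scheme mentioned above: the first key is encrypted
            under the second, that result under the third, and so on; the final
            value is then reduced to a usable 32-byte symmetric key.
            """
            combined = storage_keys[0]
            for key in storage_keys[1:]:
                combined = Fernet(key).encrypt(combined)
            return base64.urlsafe_b64encode(hashlib.sha256(combined).digest())


        # Two management devices each contribute a storage key; neither key alone
        # is sufficient to reproduce the disk key.
        k1, k2 = Fernet.generate_key(), Fernet.generate_key()
        disk_key = derive_disk_key([k1, k2])
        cipher = Fernet(disk_key)                    # used for mass storage device 262
        stored = cipher.encrypt(b"tenant data block")
        assert cipher.decrypt(stored) == b"tenant data block"
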
  • BMonitor 250 operates as a trusted third party mediating interaction among multiple mutually distrustful management agents that share responsibility for managing node 248 .
  • the landlord and tenant for node 248 do not typically fully trust one another.
  • BMonitor 250 thus operates as a trusted third party, allowing the lessor and lessee of node 248 to trust that information made available to BMonitor 250 by a particular entity or agent is accessible only to that entity or agent, and no other (e.g., confidential information given by the lessor is not accessible to the lessee, and vice versa).
  • BMonitor 250 uses a set of layered ownership domains (ODs) to assist in creating this trust.
  • An ownership domain is the basic unit of authentication and rights in BMonitor 250 , and each managing entity or agent (e.g., the lessor and the lessee) corresponds to a separate ownership domain (although each managing entity may have multiple management devices from which it can exercise its managerial responsibilities).
  • FIG. 6 is a block diagram illustrating an exemplary set of ownership domains in accordance with certain embodiments of the invention.
  • Multiple ownership domains 280 , 282 , and 284 are organized as an ownership domain stack 286 .
  • Each ownership domain 280 - 284 corresponds to a particular managerial level and one or more management devices (e.g., device(s) 110 of FIG. 1 , consoles 240 and 242 of FIG. 3 , etc.).
  • the base or root ownership domain 280 corresponds to the actual owner of the node, such as the landlord discussed above.
  • the next lowest ownership domain 282 corresponds to the entity that the owner of the hardware leases the hardware to (e.g., the tenant discussed above).
  • a management device in a particular ownership domain can set up another ownership domain for another management device that is higher on ownership domain stack 286 .
  • the entity that the node is leased to can set up another ownership domain for another entity (e.g., to set up a cluster of nodes implementing a database cluster).
  • When a new ownership domain is created, it is pushed on top of ownership domain stack 286 . It remains the top-level ownership domain until either it creates another new ownership domain or its rights are revoked.
  • An ownership domain's rights can be revoked by a device in any lower-level ownership domain on ownership domain stack 286 , at which point the ownership domain is popped from (removed from) stack 286 along with any other higher-level ownership domains. For example, if the owner of node 248 (ownership domain 280 ) were to revoke the rights of ownership domain 282 , then ownership domains 282 and 284 would be popped from ownership domain stack 286 .
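  • The push/revoke behavior of the ownership domain stack can be sketched as follows; the domain names and the simple string-based identities are placeholders, not part of the patent:

        class OwnershipDomainStack:
            """Layered ownership domains: push creates, revoke pops through."""

            def __init__(self, root_domain: str):
                self._stack = [root_domain]   # index 0 is the root (e.g., the landlord)

            @property
            def top_level(self) -> str:
                return self._stack[-1]

            def push(self, requester: str, new_domain: str) -> None:
                # Only the current top-level domain may create a new ownership domain.
                if requester != self.top_level:
                    raise PermissionError("only the top-level domain can push")
                self._stack.append(new_domain)

            def revoke(self, requester: str, target: str) -> list:
                # Any lower-level domain can revoke a higher one; the target and
                # every domain above it are popped together.
                r, t = self._stack.index(requester), self._stack.index(target)
                if r >= t:
                    raise PermissionError("can only revoke higher-level domains")
                removed, self._stack = self._stack[t:], self._stack[:t]
                return removed


        # Example mirroring FIG. 6: landlord (280) -> tenant (282) -> sub-tenant (284).
        ods = OwnershipDomainStack("landlord")
        ods.push("landlord", "tenant")
        ods.push("tenant", "sub-tenant")
        ods.revoke("landlord", "tenant")      # pops both "tenant" and "sub-tenant"
        assert ods.top_level == "landlord"
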
  • Each ownership domain has a corresponding set of rights.
  • the top-level ownership domain has one set of rights that include: (1) the right to push new ownership domains on the ownership domain stack; (2) the right to access any system memory in the node; (3) the right to access any mass storage devices in or coupled to the node; (4) the right to modify (add, remove, or change) packet filters at the node; (5) the right to start execution of software engines on the node (e.g., engines 252 of FIG. 5 ); (6) the right to stop execution of software engines on the node, including resetting the node; (7) the right to debug software engines on the node; (8) the right to change its own authentication credentials (e.g., its public key or ID); (9) the right to modify its own storage key; and (10) the right to subscribe to events (engine events, machine events, and/or packet filter events), e.g., to have a management console or other device notified when one of these events occurs.
  • Each of the lower-level ownership domains has another set of rights that include: (1) the right to pop an existing ownership domain(s); (2) the right to modify (add, remove, or change) packet filters at the node; (3) the right to change its own authentication credentials (e.g., public key or ID); and (4) the right to subscribe to machine events and/or packet filter events.
  • Some of these rights may not be included (e.g., depending on the situation, the right to debug software engines on the node may not be needed), or other rights may be included (e.g., the top-level ownership domain may include the right to pop itself off the ownership domain stack).
  • Ownership domains can be added to and removed from ownership domain stack 286 numerous times during operation. Which ownership domains are removed and/or added varies based on the activities being performed. By way of example, if the owner of node 248 (corresponding to root ownership domain 280 ) desires to perform some operation on node 248 , all higher-level ownership domains 282 - 284 are revoked, the desired operation is performed (ownership domain 280 is now the top-level domain, so the expanded set of rights are available), and then new ownership domains can be created and added to ownership domain stack 286 (e.g., so that the management agent previously corresponding to the top-level ownership domain is returned to its previous position).
  • BMonitor 250 checks, for each request received from an entity corresponding to one of the ownership domains (e.g., a management console controlled by the entity), what rights the ownership domain has. If the ownership domain has the requisite rights for the request to be implemented, then BMonitor 250 carries out the request. However, if the ownership domain does not have the requisite set of rights, then the request is not carried out (e.g., an indication that the request cannot be carried out can be returned to the requester, or alternatively the request can simply be ignored).
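  • A minimal sketch of this rights check, assuming a simple rights enumeration and a request-to-right mapping that are illustrative rather than taken from the patent, might look like the following Python fragment.

      from enum import Enum, auto

      class Right(Enum):
          PUSH_DOMAIN = auto()
          MODIFY_FILTERS = auto()
          START_ENGINE = auto()
          STOP_ENGINE = auto()
          DEBUG_ENGINE = auto()

      # Hypothetical rights sets; the top-level domain holds the expanded set.
      TOP_LEVEL_RIGHTS = {Right.PUSH_DOMAIN, Right.MODIFY_FILTERS, Right.START_ENGINE,
                          Right.STOP_ENGINE, Right.DEBUG_ENGINE}
      LOWER_LEVEL_RIGHTS = {Right.MODIFY_FILTERS}

      def handle_request(domain_rights, required_right, carry_out, reject=None):
          """Carry out a request only if the requesting ownership domain holds the needed right."""
          if required_right in domain_rights:
              return carry_out()
          if reject is not None:
              reject()          # e.g., return an indication that the request cannot be carried out
          return None           # or simply ignore the request

      # Usage: a lower-level domain may modify filters but may not start engines.
      handle_request(LOWER_LEVEL_RIGHTS, Right.MODIFY_FILTERS, lambda: print("filter updated"))
      handle_request(LOWER_LEVEL_RIGHTS, Right.START_ENGINE, lambda: print("engine started"),
                     reject=lambda: print("request rejected: insufficient rights"))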
  • Each ownership domain includes an identifier (ID), a public key, and a storage key. The public key is used to send secure communications to a management device corresponding to the ownership domain, and the storage key is used (at least in part) to encrypt information stored on mass storage devices.
  • An additional private key may also be included for each ownership domain for the management device corresponding to the ownership domain to send secure communications to the BMonitor.
  • The root ownership domain 280 may also be initialized to include the storage key (and a private key), or alternatively the storage key may be added later (e.g., generated by BMonitor 250, communicated to BMonitor 250 from a management console, etc.). Similarly, each time a new ownership domain is created, the ownership domain that creates the new ownership domain communicates an ID and public key to BMonitor 250 for the new ownership domain. A storage key (and a private key) may also be created for the new ownership domain when the new ownership domain is created, or alternatively at a later time.
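  • As a sketch only, using field names that are assumptions rather than terms from the patent, the per-domain state tracked by BMonitor 250 could be modeled as follows.

      from dataclasses import dataclass, field
      from typing import Optional

      @dataclass
      class OwnershipDomainRecord:
          """Hypothetical per-ownership-domain state kept by the BMonitor."""
          domain_id: str                       # identifier (ID) for the ownership domain
          public_key: bytes                    # used to send secure communications to the domain's management device
          storage_key: Optional[bytes] = None  # used (at least in part) to encrypt data on mass storage
          private_key: Optional[bytes] = None  # optional key for secure communications to the BMonitor
          rights: set = field(default_factory=set)

      def create_domain(creator: OwnershipDomainRecord, domain_id: str, public_key: bytes) -> OwnershipDomainRecord:
          """The creating domain supplies the new domain's ID and public key; the storage key
          (and private key) may be added at creation time or later."""
          return OwnershipDomainRecord(domain_id=domain_id, public_key=public_key)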
  • BMonitor 250 authenticates a management device(s) corresponding to each of the ownership domains. BMonitor does not accept any commands from a management device until it is authenticated, and only reveals confidential information (e.g., encryption keys) for a particular ownership domain to a management device(s) that can authenticate itself as corresponding to that ownership domain. This authentication process can occur multiple times during operation of the node, allowing the management devices for one or more ownership domains to change over time. The authentication of management devices can occur in a variety of different manners.
  • When a management device requests a connection to BMonitor 250 and asserts that it corresponds to a particular ownership domain, BMonitor 250 generates a token (e.g., a random number), encrypts the token with the public key of the ownership domain, and then sends the encrypted token to the requesting management device. Upon receipt of the encrypted token, the management device decrypts the token using its private key, and then returns the decrypted token to BMonitor 250. If the returned token matches the token that BMonitor 250 generated, then the authenticity of the management device is verified (because only the management device with the corresponding private key would be able to decrypt the token). An analogous process can be used for BMonitor 250 to authenticate itself to the management device.
  • Once authenticated, the management device can communicate requests to BMonitor 250 and have any of those requests carried out (assuming it has the rights to do so). Although not required, it is typically prudent for a management console, upon initially authenticating itself to BMonitor 250, to change its public key/private key pair.
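  • The challenge-response exchange can be sketched with standard public-key primitives. The fragment below uses the Python cryptography package purely as an illustration (the patent does not prescribe any particular library or algorithm): the BMonitor side encrypts a random token with the ownership domain's public key, and only the holder of the matching private key can return the correct token.

      import os
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import rsa, padding

      OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                          algorithm=hashes.SHA256(), label=None)

      # Key pair held by the management device; the public key was registered with the
      # BMonitor when the ownership domain was created.
      device_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
      device_public_key = device_private_key.public_key()

      # BMonitor side: generate a random token and encrypt it with the domain's public key.
      token = os.urandom(32)
      challenge = device_public_key.encrypt(token, OAEP)

      # Management device side: decrypt the challenge with the private key and return the token.
      response = device_private_key.decrypt(challenge, OAEP)

      # BMonitor side: the device is authentic only if the returned token matches.
      assert response == token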
  • When a new ownership domain is created, the management device that is creating the new ownership domain can optionally terminate any executing engines 252 and erase any system memory and mass storage devices. This provides an added level of security, on top of the encryption, to ensure that one management device does not have access to information stored on the hardware by another management device. Additionally, each time an ownership domain is popped from the stack, BMonitor 250 terminates any executing engines 252, erases the system memory, and also erases the storage key for that ownership domain. Thus, any information stored by that ownership domain cannot be accessed by the remaining ownership domains: the memory has been erased so there is no data in memory, and without the storage key the information on the mass storage device cannot be decrypted. BMonitor 250 may alternatively erase the mass storage device too. However, by simply erasing the key and leaving the data encrypted, BMonitor 250 allows the data to be recovered if the popped ownership domain is re-created (and uses the same storage key).
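  • The effect of erasing only the storage key can be illustrated with symmetric encryption at rest. The sketch below uses Fernet from the Python cryptography package as a stand-in for whatever cipher an implementation might use (an assumption, not a detail from the patent): once the key is discarded the ciphertext on the mass storage device is unreadable, but re-creating the ownership domain with the same storage key makes the data recoverable.

      from cryptography.fernet import Fernet, InvalidToken

      # Storage key held by the BMonitor for one ownership domain.
      storage_key = Fernet.generate_key()

      # Data written to the mass storage device is kept encrypted under the storage key.
      ciphertext_on_disk = Fernet(storage_key).encrypt(b"tenant confidential data")

      # After the domain is popped, the BMonitor has erased its copy of the storage key but may
      # leave the encrypted data on disk; without the correct key the data cannot be decrypted.
      try:
          Fernet(Fernet.generate_key()).decrypt(ciphertext_on_disk)   # wrong key fails
      except InvalidToken:
          pass

      # If the popped domain is re-created and supplies the same storage key, the data is recovered.
      assert Fernet(storage_key).decrypt(ciphertext_on_disk) == b"tenant confidential data"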
  • FIG. 7 is a flow diagram illustrating the general operation of BMonitor 250 in accordance with certain embodiments of the invention.
  • BMonitor 250 monitors the inputs it receives (block 290). These inputs can be from a variety of different sources, such as another node 248, a client computer via network connection 216 (FIG. 3), cluster operations management console 240, application operations management console 242, an engine 252, a management device 110 (FIG. 1), etc.
  • If the received request is a control request (e.g., from one of consoles 240 or 242 of FIG. 3, or a management device(s) 110 of FIG. 1), a check is made (based on the top-level ownership domain) as to whether the requesting device has the necessary rights for the request (block 292). If the requesting device does not have the necessary rights, then BMonitor 250 returns to monitoring inputs (block 290) without implementing the request. However, if the requesting device has the necessary rights, then the request is implemented (block 294), and BMonitor 250 continues to monitor the inputs it receives (block 290).
  • Alternatively, if the received request is a data request (either inbound or outbound), BMonitor 250 either accepts or rejects the request (act 296), and continues to monitor the inputs it receives (block 290). Whether BMonitor 250 accepts a request is dependent on the filters 258 (FIG. 5), as discussed above.
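  • As a rough sketch of the monitoring loop of FIG. 7, assuming hypothetical helper callables (has_right, filters_allow, carry_out, and forward are illustrative names only), the dispatch between control requests and data requests could look like the following.

      from dataclasses import dataclass

      @dataclass
      class Request:
          kind: str              # "control" or "data"
          payload: object = None

      def bmonitor_loop(inputs, domain_rights, has_right, filters_allow, carry_out, forward):
          """Minimal sketch of the FIG. 7 flow: rights-check control requests, filter data requests."""
          for request in inputs:                               # block 290: monitor inputs
              if request.kind == "control":
                  if has_right(domain_rights, request):        # block 292: check top-level domain rights
                      carry_out(request)                       # block 294: implement the request
                  # otherwise return to monitoring without implementing the request
              else:                                            # inbound or outbound data request
                  if filters_allow(request):                   # act 296: accept or reject per filters 258
                      forward(request)                         # pass the data toward its target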
  • FIG. 8 is a flowchart illustrating an exemplary process for handling outbound data requests in accordance with certain embodiments of the invention.
  • The process of FIG. 8 is implemented by BMonitor 250 of FIG. 5, and may be performed in software.
  • The process of FIG. 8 is discussed with additional reference to components in FIGS. 1, 3 and 5.
  • Controller 254 compares the request to outbound request restrictions (act 302). This comparison is accomplished by comparing information corresponding to the data (e.g., information in a header of a packet that includes the data, or information inherent in the data, such as the manner (e.g., which of multiple function calls is used) in which the data request was provided to BMonitor 250) with the outbound request restrictions maintained by filters 258. This comparison allows BMonitor 250 to determine whether it is permissible to pass the outbound data request to the target (act 304). For example, if filters 258 indicate which targets data cannot be sent to, then it is permissible to pass the outbound data request to the target only if the target identifier is not identified in filters 258.
  • If it is permissible to pass the outbound data request to the target, BMonitor 250 sends the request to the target (act 306). For example, BMonitor 250 can transmit the request to the appropriate target via transport medium 211 (and possibly network connection 216), or via another connection to network 108. However, if it is not permissible to pass the outbound request to the target, then BMonitor 250 rejects the request (act 308). BMonitor 250 may optionally transmit an indication to the source of the request that it was rejected, or alternatively may simply drop the request.
  • FIG. 9 is a flowchart illustrating an exemplary process for handling inbound data requests in accordance with certain embodiments of the invention.
  • The process of FIG. 9 is implemented by BMonitor 250 of FIG. 5, and may be performed in software.
  • The process of FIG. 9 is discussed with additional reference to components in FIG. 5.
  • Controller 254 compares the request to inbound request restrictions (act 312). This comparison is accomplished by comparing information corresponding to the data with the inbound request restrictions maintained by filters 258. This comparison allows BMonitor 250 to determine whether it is permissible for any of software engines 252 to receive the data request (act 314). For example, if filters 258 indicate which sources data can be received from, then it is permissible for an engine 252 to receive the data request only if the source of the data is identified in filters 258.
  • If it is permissible for the data request to be received, BMonitor 250 forwards the request to the targeted engine(s) 252 (act 316). However, if it is not permissible to receive the inbound data request from the source, then BMonitor 250 rejects the request (act 318). BMonitor 250 may optionally transmit an indication to the source of the request that it was rejected, or alternatively may simply drop the request.
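  • The filter checks of FIGS. 8 and 9 can be sketched with a small address-based rule set. The Python fragment below is an illustration only (the rule structure and names are assumptions, not the patent's data format); it applies an allow-list style filter to the source of inbound data and the target of outbound data.

      def permitted(filters, direction, address):
          """Return True if the address is acceptable for the given direction ('inbound' or 'outbound').

          Each filter is a hypothetical (direction, allowed_addresses) pair; an empty rule set could
          instead be treated as deny-all or allow-all depending on policy."""
          for rule_direction, allowed in filters:
              if rule_direction == direction:
                  return address in allowed
          return False

      # Example: a server-cluster boundary listing the node addresses within the cluster.
      cluster_filters = [
          ("inbound",  {"10.0.0.2", "10.0.0.3"}),   # sources data may be received from
          ("outbound", {"10.0.0.2", "10.0.0.3"}),   # targets data may be sent to
      ]

      assert permitted(cluster_filters, "outbound", "10.0.0.2")       # within the cluster: pass (act 306/316)
      assert not permitted(cluster_filters, "inbound", "172.16.0.9")  # outside the cluster: reject (act 308/318)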

Abstract

A controller, referred to as the “BMonitor”, is situated on a computer. The BMonitor includes a plurality of filters that identify where data can be sent to and/or received from, such as another node in a co-location facility or a client computer coupled to the computer via the Internet. The BMonitor further receives and implements requests from external sources regarding the management of software components executing on the computer, allowing such external sources to initiate, terminate, debug, etc. software components on the computer. Additionally, the BMonitor operates as a trusted third party mediating interaction among multiple external sources managing the computer.

Description

    RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 09/695,820, filed Oct. 24, 2000, entitled “System and Method for Restricting Data Transfers and Managing Software Components of Distributed Computers”, which is hereby incorporated by reference herein.
  • TECHNICAL FIELD
  • This invention relates to computer system management. More particularly, the invention relates to restricting data transfers and managing software components of distributed computers.
  • BACKGROUND OF THE INVENTION
  • The Internet and its use have expanded greatly in recent years, and this expansion is expected to continue. One significant way in which the Internet is used is the World Wide Web (also referred to as the “web”), which is a collection of documents (referred to as “web pages”) that users can view or otherwise render and which typically include links to one or more other pages that the user can access. Many businesses and individuals have created a presence on the web, typically consisting of one or more web pages describing themselves, describing their products or services, identifying other information of interest, allowing goods or services to be purchased, etc.
  • Web pages are typically made available on the web via one or more web servers, a process referred to as “hosting” the web pages. Sometimes these web pages are freely available to anyone that requests to view them (e.g., a company's advertisements) and other times access to the web pages is restricted (e.g., a password may be necessary to access the web pages). Given the large number of people that may be requesting to view the web pages (especially in light of the global accessibility to the web), a large number of servers may be necessary to adequately host the web pages (e.g., the same web page can be hosted on multiple servers to increase the number of people that can access the web page concurrently). Additionally, because the web is geographically distributed and has non-uniformity of access, it is often desirable to distribute servers to diverse remote locations in order to minimize access times for people in diverse locations of the world. Furthermore, people tend to view web pages around the clock (again, especially in light of the global accessibility to the web), so servers hosting web pages should be kept functional 24 hours per day.
  • Managing a large number of servers, however, can be difficult. A reliable power supply is necessary to ensure the servers can run. Physical security is necessary to ensure that a thief or other mischievous person does not attempt to damage or steal the servers. A reliable Internet connection is required to ensure that the access requests will reach the servers. A proper operating environment (e.g., temperature, humidity, etc.) is required to ensure that the servers operate properly. Thus, “co-location facilities” have evolved which assist companies in handling these difficulties.
  • A co-location facility refers to a complex that can house multiple servers. The co-location facility typically provides a reliable Internet connection, a reliable power supply, and proper operating environment. The co-location facility also typically includes multiple secure areas (e.g., cages) into which different companies can situate their servers. The collection of servers that a particular company situates at the co-location facility is referred to as a “server cluster”, even though in fact there may only be a single server at any individual co-location facility. The particular company is then responsible for managing the operation of the servers in their server cluster.
  • Such co-location facilities, however, also present problems. One problem is data security. Different companies (even competitors) can have server clusters at the same co-location facility. Care is required, in such circumstances, to ensure that data received from the Internet (or sent by a server in the server cluster) that is intended for one company is not routed to a server of another company situated at the co-location facility.
  • An additional problem is the management of the servers once they are placed in the co-location facility. Currently, a system administrator from a company is able to contact a co-location facility administrator (typically by telephone) and ask him or her to reset a particular server (typically by pressing a hardware reset button on the server, or powering off then powering on the server) in the event of a failure of (or other problem with) the server. This limited reset-only ability provides very little management functionality to the company. Alternatively, the system administrator from the company can physically travel to the co-location facility him/her-self and attend to the faulty server. Unfortunately, a significant amount of time can be wasted by the system administrator in traveling to the co-location facility to attend to a server. Thus, it would be beneficial to have an improved way to manage server computers at a co-location facility.
  • Additionally, the world is becoming populated with ever increasing numbers of individual user computers in the form of personal computers (PCs), personal digital assistants (PDAs), pocket computers, palm-sized computers, handheld computers, digital cellular phones, etc. Management of the software on these user computers can be very laborious and time consuming and is particularly difficult for the often non-technical users of these machines. Often a system administrator or technician must either travel to the remote location of the user's computer, or walk through management operations over a telephone. It would be further beneficial to have an improved way to manage remote computers at the user's location without user intervention.
  • The invention described below addresses these disadvantages, restricting data transfers and managing software components of distributed computers.
  • SUMMARY OF THE INVENTION
  • Restricting data transfers and managing software components in clusters of server computers located at a co-location facility is described herein.
  • According to one aspect, a controller (referred to as the “BMonitor”) is situated on a computer (e.g., each node in a co-location facility). The BMonitor includes a plurality of filters that identify where data can be sent to and/or received from, such as another node in the co-location facility or a client computer coupled to the computer via the Internet. These filters can then be modified, during operation of the computer, by one or more management devices coupled to the computer.
  • According to another aspect, a controller referred to as the “BMonitor” (situated on a computer) manages software components executing on that computer. Requests are received by the BMonitor from external sources and implemented by the BMonitor. Such requests can originate from a management console local to the computer or alternatively remote from the computer.
  • According to another aspect, a controller referred to as the “BMonitor” (situated on a computer) operates as a trusted third party mediating interaction among multiple management devices. The BMonitor maintains multiple ownership domains, each corresponding to a management device(s) and each having a particular set of rights that identify what types of management functions they can command the BMonitor to carry out. Only one ownership domain is the top-level domain at any particular time, and the top-level domain has a more expanded set of rights than any of the lower-level domains. The top-level domain can create new ownership domains corresponding to other management devices, and can also be removed, and the management rights of its corresponding management device revoked, at any time by a management device corresponding to a lower-level ownership domain. Each time the top-level ownership domain changes, the computer's system memory can be erased so that no confidential information from one ownership domain is made available to devices corresponding to other ownership domains.
  • According to another aspect, the BMonitor is implemented in a more-privileged level than other software engines executing on the node, preventing other software engines from interfering with restrictions imposed by the BMonitor.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings. The same numbers are used throughout the figures to reference like components and/or features.
  • FIG. 1 shows a client/server network system and environment such as may be used with certain embodiments of the invention.
  • FIG. 2 shows a general example of a computer that can be used in accordance with certain embodiments of the invention.
  • FIG. 3 is a block diagram illustrating an exemplary co-location facility in more detail.
  • FIG. 4 is a block diagram illustrating an exemplary multi-tiered server cluster management architecture.
  • FIG. 5 is a block diagram illustrating an exemplary node of a co-location facility in more detail in accordance with certain embodiments of the invention.
  • FIG. 6 is a block diagram illustrating an exemplary set of ownership domains in accordance with certain embodiments of the invention.
  • FIG. 7 is a flow diagram illustrating the general operation of a BMonitor in accordance with certain embodiments of the invention.
  • FIG. 8 is a flowchart illustrating an exemplary process for handling outbound data requests in accordance with certain embodiments of the invention.
  • FIG. 9 is a flowchart illustrating an exemplary process for handling inbound data requests in accordance with certain embodiments of the invention.
  • DETAILED DESCRIPTION
  • FIG. 1 shows a client/server network system and environment such as may be used with certain embodiments of the invention. Generally, the system includes one or more (n) client computers 102, one or more (m) co-location facilities 104 each including multiple clusters of server computers (server clusters) 106, one or more management devices 110, and one or more separate (e.g., not included in a co-location facility) servers 112. The servers, clients, and management devices communicate with each other over a data communications network 108. The communications network in FIG. 1 comprises a public network 108 such as the Internet. Other types of communications networks might also be used, in addition to or in place of the Internet, including local area networks (LANs), wide area networks (WANs), etc. Data communications network 108 can be implemented in any of a variety of different manners, including wired and/or wireless communications media.
  • Communication over network 108 can be carried out using any of a wide variety of communications protocols. In one implementation, client computers 102 and server computers in clusters 106 can communicate with one another using the Hypertext Transfer Protocol (HTTP), in which web pages are hosted by the server computers and written in a markup language, such as the Hypertext Markup Language (HTML) or the eXtensible Markup Language (XML).
  • Management device 110 operates to manage software components of one or more computing devices located at a location remote from device 110. This management may also include restricting data transfers into and/or out of the computing device being managed. In the illustrated example of FIG. 1, management device 110 can remotely manage any one or more of: a client(s) 102, a server cluster(s) 106, or a server(s) 112. Any of a wide variety of computing devices can be remotely managed, including personal computers (PCs), network PCs, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, gaming consoles, Internet appliances, personal digital assistants (PDAs), pocket computers, palm-sized computers, handheld computers, digital cellular phones, etc. Remote management of a computing device is accomplished by communicating commands to the device via network 108, as discussed in more detail below.
  • In the discussion herein, embodiments of the invention are described in the general context of computer-executable instructions, such as program modules, being executed by one or more conventional personal computers. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that various embodiments of the invention may be practiced with other computer system configurations, including hand-held devices, gaming consoles, Internet appliances, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. In a distributed computer environment, program modules may be located in both local and remote memory storage devices.
  • Alternatively, embodiments of the invention can be implemented in hardware or a combination of hardware, software, and/or firmware. For example, all or part of the invention can be implemented in one or more application specific integrated circuits (ASICs) or programmable logic devices (PLDs).
  • FIG. 2 shows a general example of a computer 142 that can be used in accordance with certain embodiments of the invention. Computer 142 is shown as an example of a computer that can perform the functions of a client computer 102 of FIG. 1, a server computer or node in a co-location facility 104 of FIG. 1, a management device 110 of FIG. 1, a server 112 of FIG. 1, or a local or remote management console as discussed in more detail below.
  • Computer 142 includes one or more processors or processing units 144, a system memory 146, and a bus 148 that couples various system components including the system memory 146 to processors 144. The bus 148 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. The system memory includes read only memory (ROM) 150 and random access memory (RAM) 152. A basic input/output system (BIOS) 154, containing the basic routines that help to transfer information between elements within computer 142, such as during start-up, is stored in ROM 150.
  • Computer 142 further includes a hard disk drive 156 for reading from and writing to a hard disk, not shown, connected to bus 148 via a hard disk drive interface 157 (e.g., a SCSI, ATA, or other type of interface); a magnetic disk drive 158 for reading from and writing to a removable magnetic disk 160, connected to bus 148 via a magnetic disk drive interface 161; and an optical disk drive 162 for reading from or writing to a removable optical disk 164 such as a CD ROM, DVD, or other optical media, connected to bus 148 via an optical drive interface 165. The drives and their associated computer-readable media provide nonvolatile storage of computer readable instructions, data structures, program modules and other data for computer 142. Although the exemplary environment described herein employs a hard disk, a removable magnetic disk 160 and a removable optical disk 164, it should be appreciated by those skilled in the art that other types of computer readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, random access memories (RAMs), read only memories (ROMs), and the like, may also be used in the exemplary operating environment.
  • A number of program modules may be stored on the hard disk, magnetic disk 160, optical disk 164, ROM 150, or RAM 152, including an operating system 170, one or more application programs 172, other program modules 174, and program data 176. A user may enter commands and information into computer 142 through input devices such as keyboard 178 and pointing device 180. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are connected to the processing unit 144 through an interface 168 that is coupled to the system bus. A monitor 184 or other type of display device is also connected to the system bus 148 via an interface, such as a video adapter 186. In addition to the monitor, personal computers typically include other peripheral output devices (not shown) such as speakers and printers.
  • Computer 142 optionally operates in a networked environment using logical connections to one or more remote computers, such as a remote computer 188. The remote computer 188 may be another personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to computer 142, although only a memory storage device 190 has been illustrated in FIG. 2. The logical connections depicted in FIG. 2 include a local area network (LAN) 192 and a wide area network (WAN) 194. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet. In the described embodiment of the invention, remote computer 188 executes an Internet Web browser program (which may optionally be integrated into the operating system 170) such as the “Internet Explorer” Web browser manufactured and distributed by Microsoft Corporation of Redmond, Wash.
  • When used in a LAN networking environment, computer 142 is connected to the local network 192 through a network interface or adapter 196. When used in a WAN networking environment, computer 142 typically includes a modem 198 or other component for establishing communications over the wide area network 194, such as the Internet. The modem 198, which may be internal or external, is connected to the system bus 148 via an interface (e.g., a serial port interface 168). In a networked environment, program modules depicted relative to the personal computer 142, or portions thereof, may be stored in the remote memory storage device. It is to be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
  • Generally, the data processors of computer 142 are programmed by means of instructions stored at different times in the various computer-readable storage media of the computer. Programs and operating systems are typically distributed, for example, on floppy disks or CD-ROMs. From there, they are installed or loaded into the secondary memory of a computer. At execution, they are loaded at least partially into the computer's primary electronic memory. The invention described herein includes these and other various types of computer-readable storage media when such media contain instructions or programs for implementing the steps described below in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described below. Furthermore, certain sub-components of the computer may be programmed to perform the functions and steps described below. The invention includes such sub-components when they are programmed as described. In addition, the invention described herein includes data structures, described below, as embodied on various types of memory media.
  • For purposes of illustration, programs and other executable program components such as the operating system are illustrated herein as discrete blocks, although it is recognized that such programs and components reside at various times in different storage components of the computer, and are executed by the data processor(s) of the computer.
  • FIG. 3 is a block diagram illustrating an exemplary co-location facility in more detail. Co-location facility 104 is illustrated including multiple nodes (also referred to as server computers) 210. Co-location facility 104 can include any number of nodes 210, and can easily include an amount of nodes numbering into the thousands.
  • The nodes 210 are grouped together in clusters, referred to as server clusters (or node clusters). For ease of explanation and to avoid cluttering the drawings, only a single cluster 212 is illustrated in FIG. 3. Each server cluster includes nodes 210 that correspond to a particular customer of co-location facility 104. The nodes 210 of a server cluster can be physically isolated from the nodes 210 of other server clusters. This physical isolation can take different forms, such as separate locked cages or separate rooms at co-location facility 104. Physically isolating server clusters ensures customers of co-location facility 104 that only they can physically access their nodes (other customers cannot).
  • A landlord/tenant relationship (also referred to as a lessor/lessee relationship) can also be established based on the nodes 210. The owner (and/or operator) of co-location facility 104 owns (or otherwise has rights to) the individual nodes 210, and thus can be viewed as a “landlord”. The customers of co-location facility 104 lease the nodes 210 from the landlord, and thus can be viewed as a “tenant”. The landlord is typically not concerned with what types of data or programs are being stored at the nodes 210 by the tenant, but does impose boundaries on the clusters that prevent nodes 210 from different clusters from communicating with one another, as discussed in more detail below. Additionally, the nodes 210 provide assurances to the tenant that, although the nodes are only leased to the tenant, the landlord cannot access confidential information stored by the tenant.
  • Although physically isolated, nodes 210 of different clusters are often physically coupled to the same transport medium (or media) 211 that enables access to network connection(s) 216, and possibly application operations management console 242, discussed in more detail below. This transport medium can be wired or wireless.
  • As each node 210 can be coupled to a shared transport medium 211, each node 210 is configurable to restrict which other nodes 210 data can be sent to or received from. Given that a number of different nodes 210 may be included in a customer's (also referred to as tenant's) server cluster, the customer may want to be able to pass data between different nodes 210 within the cluster for processing, storage, etc. However, the customer will typically not want data to be passed to other nodes 210 that are not in the server cluster. Configuring each node 210 in the cluster to restrict which other nodes 210 data can be sent to or received from allows a boundary for the server cluster to be established and enforced. Establishment and enforcement of such server cluster boundaries prevents customer data from being erroneously or improperly forwarded to a node that is not part of the cluster.
  • These initial boundaries established by the landlord prevent communication between nodes 210 of different customers, thereby ensuring that each customer's data can be passed to other nodes 210 of that customer. The customer itself may also further define sub-boundaries within its cluster, establishing sub-clusters of nodes 210 that data cannot be communicated out of (or in to) either to or from other nodes in the cluster. The customer is able to add, modify, remove, etc. such sub-cluster boundaries at will, but only within the boundaries defined by the landlord (that is, the cluster boundaries). Thus, the customer is not able to alter boundaries in a manner that would allow communication to or from a node 210 to extend to another node 210 that is not within the same cluster.
  • Co-location facility 104 supplies reliable power 214 and reliable network connection(s) 216 (e.g., to network 108 of FIG. 1) to each of the nodes 210. Power 214 and network connection(s) 216 are shared by all of the nodes 210, although alternatively separate power 214 and network connection(s) 216 may be supplied to nodes 210 or groupings (e.g., clusters) of nodes. Any of a wide variety of conventional mechanisms for supplying reliable power can be used to supply reliable power 214, such as power received from a public utility company along with backup generators in the event of power failures, redundant generators, batteries, fuel cells, or other power storage mechanisms, etc. Similarly, any of a wide variety of conventional mechanisms for supplying a reliable network connection can be used to supply network connection(s) 216, such as redundant connection transport media, different types of connection media, different access points (e.g., different Internet access points, different Internet service providers (ISPs), etc.).
  • In certain embodiments, nodes 210 are leased or sold to customers by the operator or owner of co-location facility 104 along with the space (e.g., locked cages) and service (e.g., access to reliable power 214 and network connection(s) 216) at facility 104. In other embodiments, space and service at facility 104 may be leased to customers while one or more nodes are supplied by the customer.
  • Management of each node 210 is carried out in a multiple-tiered manner. FIG. 4 is a block diagram illustrating an exemplary multi-tiered management architecture. The multi-tiered architecture includes three tiers: a cluster operations management tier 230, an application operations management tier 232, and an application development tier 234. Cluster operations management tier 230 is implemented locally at the same location as the server(s) being managed (e.g., at a co-location facility) and involves managing the hardware operations of the server(s). In the illustrated example, cluster operations management tier 230 is not concerned with what software components are executing on the nodes 210, but only with the continuing operation of the hardware of nodes 210 and establishing any boundaries between clusters of nodes.
  • The application operations management tier 232, on the other hand, is implemented at a remote location other than where the server(s) being managed are located (e.g., other than the co-location facility), but from a client computer that is still communicatively coupled to the server(s). The application operations management tier 232 involves managing the software operations of the server(s) and defining any sub-boundaries within server clusters. The client can be coupled to the server(s) in any of a variety of manners, such as via the Internet or via a dedicated (e.g., dial-up) connection. The client can be coupled continually to the server(s), or alternatively sporadically (e.g., only when needed for management purposes).
  • The application development tier 234 is implemented on another client computer at a location other than the server(s) (e.g., other than at the co-location facility) and involves development of software components or engines for execution on the server(s). Alternatively, current software on a node 210 at co-location facility 104 could be accessed by a remote client to develop additional software components or engines for the node. Although the client at which application development tier 234 is implemented is typically a different client than that at which application operations management tier 232 is implemented, tiers 232 and 234 could be implemented (at least in part) on the same client.
  • Although only three tiers are illustrated in FIG. 4, alternatively the multi-tiered architecture could include different numbers of tiers. For example, the application operations management tier may be separated into two tiers, each having different (or overlapping) responsibilities, resulting in a 4-tiered architecture. The management at these tiers may occur from the same place (e.g., a single application operations management console may be shared), or alternatively from different places (e.g., two different operations management consoles).
  • Returning to FIG. 3, co-location facility 104 includes a cluster operations management console for each server cluster. In the example of FIG. 3, cluster operations management console 240 corresponds to cluster 212 and may be, for example, a management device 110 of FIG. 1. Cluster operations management console 240 implements cluster operations management tier 230 (FIG. 4) for cluster 212 and is responsible for managing the hardware operations of nodes 210 in cluster 212. Cluster operations management console 240 monitors the hardware in cluster 212 and attempts to identify hardware failures. Any of a wide variety of hardware failures can be monitored for, such as processor failures, bus failures, memory failures, etc. Hardware operations can be monitored in any of a variety of manners, such as cluster operations management console 240 sending test messages or control signals to the nodes 210 that require the use of particular hardware in order to respond (no response or an incorrect response indicates failure), having messages or control signals that require the use of particular hardware to generate periodically sent by nodes 210 to cluster operations management console 240 (not receiving such a message or control signal within a specified amount of time indicates failure), etc. Alternatively, cluster operations management console 240 may make no attempt to identify what type of hardware failure has occurred, but rather simply that a failure has occurred.
  • Once a hardware failure is detected, cluster operations management console 240 acts to correct the failure. The action taken by cluster operations management console 240 can vary based on the hardware as well as the type of failure, and can vary for different server clusters. The corrective action can be notification of an administrator (e.g., a flashing light, an audio alarm, an electronic mail message, calling a cell phone or pager, etc.), or an attempt to physically correct the problem (e.g., reboot the node, activate another backup node to take its place, etc.).
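  • As an illustrative sketch of the heartbeat-style monitoring described above (the timeout value and function names are assumptions, not details from the patent), a console could flag a node as failed when no periodic message arrives within a specified interval.

      import time

      HEARTBEAT_TIMEOUT = 30.0   # seconds; assumed value, not specified in the patent

      def check_heartbeats(last_heartbeat, now=None, timeout=HEARTBEAT_TIMEOUT):
          """Return the node identifiers whose last heartbeat is older than the timeout."""
          now = time.time() if now is None else now
          return [node for node, stamp in last_heartbeat.items() if now - stamp > timeout]

      # Usage: node-b has not reported within the timeout and is treated as failed, triggering a
      # corrective action (notify an administrator, reboot the node, activate a backup node, etc.).
      now = time.time()
      last_heartbeat = {"node-a": now - 5.0, "node-b": now - 120.0}
      assert check_heartbeats(last_heartbeat, now=now) == ["node-b"]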
  • Cluster operations management console 240 also establishes cluster boundaries within co-location facility 104. The cluster boundaries established by console 240 prevent nodes 210 in one cluster (e.g., cluster 212) from communicating with nodes in another cluster (e.g., any node not in cluster 212), while at the same time not interfering with the ability of nodes 210 within a cluster from communicating with other nodes within that cluster. These boundaries provide security for the tenants' data, allowing them to know that their data cannot be communicated to other tenants' nodes 210 at facility 104 even though network connection 216 may be shared by the tenants.
  • In the illustrated example, each cluster of co-location facility 104 includes a dedicated cluster operations management console. Alternatively, a single cluster operations management console may correspond to, and manage hardware operations of, multiple server clusters. According to another alternative, multiple cluster operations management consoles may correspond to, and manage hardware operations of, a single server cluster. Such multiple consoles can manage a single server cluster in a shared manner, or one console may operate as a backup for another console (e.g., providing increased reliability through redundancy, to allow for maintenance, etc.).
  • An application operations management console 242 is also communicatively coupled to co-location facility 104. Application operations management console 242 may be, for example, a management device 110 of FIG. 1. Application operations management console 242 is located at a location remote from co-location facility 104 (that is, not within co-location facility 104), typically being located at the offices of the customer. A different application operations management console 242 corresponds to each server cluster of co-location facility 104, although alternatively multiple consoles 242 may correspond to a single server cluster, or a single console 242 may correspond to multiple server clusters. Application operations management console 242 implements application operations management tier 232 (FIG. 4) for cluster 212 and is responsible for managing the software operations of nodes 210 in cluster 212 as well as securing sub-boundaries within cluster 212.
  • Application operations management console 242 monitors the software in cluster 212 and attempts to identify software failures. Any of a wide variety of software failures can be monitored for, such as application processes or threads that are “hung” or otherwise non-responsive, an error in execution of application processes or threads, etc. Software operations can be monitored in any of a variety of manners (similar to the monitoring of hardware operations discussed above), such as application operations management console 242 sending test messages or control signals to particular processes or threads executing on the nodes 210 that require the use of particular routines in order to respond (no response or an incorrect response indicates failure), having messages or control signals that require the use of particular software routines to generate periodically sent by processes or threads executing on nodes 210 to application operations management console 242 (not receiving such a message or control signal within a specified amount of time indicates failure), etc. Alternatively, application operations management console 242 may make no attempt to identify what type of software failure has occurred, but rather simply that a failure has occurred.
  • Once a software failure is detected, application operations management console 242 acts to correct the failure. The action taken by application operations management console 242 can vary based on the hardware as well as the type of failure, and can vary for different server clusters. The corrective action can be notification of an administrator (e.g., a flashing light, an audio alarm, an electronic mail message, calling a cell phone or pager, etc.), or an attempt to correct the problem (e.g., reboot the node, re-load the software component or engine image, terminate and re-execute the process, etc.).
  • Thus, the management of a node 210 is distributed across multiple managers, regardless of the number of other nodes (if any) situated at the same location as the node 210. The multi-tiered management allows the hardware operations management to be separated from the application operations management, allowing two different consoles (each under the control of a different entity) to share the management responsibility for the node.
  • FIG. 5 is a block diagram illustrating an exemplary remotely managed node in more detail in accordance with certain embodiments of the invention. Node 248 can be a node 210 of a co-location facility, or alternatively a separate device (e.g., a client 102 or server 112 of FIG. 1). Node 248 includes a monitor 250, referred to as the "BMonitor", and a plurality of software components or engines 252, and is coupled to (or alternatively incorporates) a mass storage device 262. In the illustrated example, node 248 is a computing device having a processor(s) that supports multiple privilege levels (e.g., rings in an x86 architecture processor). In the illustrated example, these privilege levels are referred to as rings, although alternate implementations using different processor architectures may use different nomenclature. The multiple rings provide a set of prioritized levels that software can execute at, often including 4 levels (Rings 0, 1, 2, and 3). Ring 0 is typically referred to as the most privileged ring. Software processes executing in Ring 0 can typically access more features (e.g., instructions) than processes executing in less privileged Rings. Furthermore, a processor executing in a particular Ring cannot alter code or data in a higher priority ring. In the illustrated example, BMonitor 250 executes in Ring 0, while engines 252 execute in Ring 1 (or alternatively Rings 2 and/or 3). Thus, the code or data of BMonitor 250 (executing in Ring 0) cannot be altered directly by engines 252 (executing in Ring 1). Rather, any such alterations would have to be made by an engine 252 requesting BMonitor 250 to make the alteration (e.g., by sending a message to BMonitor 250, invoking a function of BMonitor 250, etc.). Implementing BMonitor 250 in Ring 0 protects BMonitor 250 from a rogue or malicious engine 252 that tries to bypass any restrictions imposed by BMonitor 250.
  • Alternatively, BMonitor 250 may be implemented in other manners that protect it from a rogue or malicious engine 252. For example, node 248 may include multiple processors—one (or more) processor(s) for executing engines 252, and another processor(s) to execute BMonitor 250. By allowing only BMonitor 250 to execute on a processor(s) separate from the processor(s) on which engines 252 are executing, BMonitor 250 can be effectively shielded from engines 252.
  • BMonitor 250 is the fundamental control module of node 248—it controls (and optionally includes) both the network interface card and the memory manager. By controlling the network interface card (which may be separate from BMonitor 250, or alternatively BMonitor 250 may be incorporated on the network interface card), BMonitor 250 can control data received by and sent by node 248. By controlling the memory manager, BMonitor 250 controls the allocation of memory to engines 252 executing in node 248 and thus can assist in preventing rogue or malicious engines from interfering with the operation of BMonitor 250.
  • Although various aspects of node 248 may be under control of BMonitor 250 (e.g., the network interface card), BMonitor 250 still makes at least part of such functionality available to engines 252 executing on the node 248. BMonitor 250 provides an interface (e.g., via controller 254 discussed in more detail below) via which engines 252 can request access to the functionality, such as to send data out to another node 248 within a co-location facility or on the Internet. These requests can take any of a variety of forms, such as sending messages, calling a function, etc.
  • BMonitor 250 includes controller 254, network interface 256, one or more filters 258, one or more keys 259, and a BMonitor Control Protocol (BMCP) module 260. Network interface 256 provides the interface between node 248 and the network (e.g., network 108 of FIG. 1). Filters 258 identify other nodes 248 in a co-location facility, and/or other sources or targets (e.g., coupled to Internet 108 of FIG. 1), that data can (or alternatively cannot) be sent to and/or received from. The nodes or other sources/targets can be identified in any of a wide variety of manners, such as by network address (e.g., Internet Protocol (IP) address), some other globally unique identifier, a locally unique identifier (e.g., a numbering scheme proprietary or local to co-location facility 104), etc.
  • Filters 258 can fully restrict access to a node (e.g., no data can be received from or sent to the node), or partially restrict access to a node. Partial access restriction can take different forms. For example, a node may be restricted so that data can be received from the node but not sent to the node (or vice versa). By way of another example, a node may be restricted so that only certain types of data (e.g., communications in accordance with certain protocols, such as HTTP) can be received from and/or sent to the node. Filtering based on particular types of data can be implemented in different manners, such as by communicating data in packets with header information that indicate the type of data included in the packet.
  • Filters 258 can be added by one or more management devices 110 of FIG. 1 or either of application operations management console 242 or cluster operations management console 240 of FIG. 3. In the illustrated example, filters added by cluster operations management console 240 (to establish cluster boundaries) restrict full access to nodes (e.g., any access to another node can be prevented) whereas filters added by application operations management console 242 (to establish sub-boundaries within a cluster) or management device 110 can restrict either full access to nodes or partial access.
  • Controller 254 also imposes some restrictions on what filters can be added to filters 258. In the multi-tiered management architecture illustrated in FIGS. 3 and 4, controller 254 allows cluster operations management console 240 to add any filters it desires (which will define the boundaries of the cluster). However, controller 254 restricts application operations management console 242 to adding only filters that are at least as restrictive as those added by console 240. If console 242 attempts to add a filter that is less restrictive than those added by console 240 (in which case the sub-boundary may extend beyond the cluster boundaries), controller 254 refuses to add the filter (or alternatively may modify the filter so that it is not less restrictive). By imposing such a restriction, controller 254 can ensure that the sub-boundaries established at the application operations management level do not extend beyond the cluster boundaries established at the cluster operations management level.
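  • A minimal sketch of this layering rule, using an allow-list representation chosen only for illustration, is shown below: a sub-boundary filter proposed by the application operations management console is accepted only if it is at least as restrictive as (i.e., a subset of) the cluster boundary set by the cluster operations management console, and may otherwise be tightened rather than refused.

      def add_sub_boundary_filter(cluster_allowed, proposed_allowed):
          """Accept a sub-boundary only if it does not extend beyond the cluster boundary.

          Both arguments are sets of node addresses that data may be exchanged with.
          Returns the filter actually installed (the proposal, possibly tightened)."""
          if proposed_allowed <= cluster_allowed:
              return proposed_allowed                      # at least as restrictive: accept as-is
          # Alternatively, tighten the proposal instead of refusing it outright.
          return proposed_allowed & cluster_allowed

      cluster_boundary = {"10.0.0.2", "10.0.0.3", "10.0.0.4"}
      assert add_sub_boundary_filter(cluster_boundary, {"10.0.0.2"}) == {"10.0.0.2"}
      assert add_sub_boundary_filter(cluster_boundary, {"10.0.0.2", "192.168.1.1"}) == {"10.0.0.2"}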
  • Controller 254, using one or more filters 258, operates to restrict data packets sent from node 248 and/or received by node 248. All data intended for an engine 252, or sent by an engine 252, to another node, is passed through network interface 256 and filters 258. Controller 254 applies the filters 258 to the data, comparing the target of the data (e.g., typically identified in a header portion of a packet including the data) to acceptable (and/or restricted) nodes (and/or network addresses) identified in filters 258. If filters 258 indicate that the target of the data is acceptable, then controller 254 allows the data to pass through to the target (either into node 248 or out from node 248). However, if filters 258 indicate that the target of the data is not acceptable, then controller 254 prevents the data from passing through to the target. Controller 254 may return an indication to the source of the data that the data cannot be passed to the target, or may simply ignore or discard the data.
  • The application of filters 258 to the data by controller 254 allows the boundary restrictions of a server cluster (FIG. 3) to be imposed. Filters 258 can be programmed (e.g., by application operations management console 242 of FIG. 3) with the node addresses of all the nodes within the server cluster (e.g., cluster 212). Controller 254 then prevents data received from any node not within the server cluster from being passed through to an engine 252, and similarly prevents any data being sent to a node other than one within the server cluster from being sent. Similarly, data received from Internet 108 (FIG. 1) can identify a target node 248 (e.g., by IP address), so that controller 254 of any node other than the target node will prevent the data from being passed through to an engine 252. Furthermore, as filters 258 can be readily modified by cluster operations management console 240, server cluster boundaries can be easily changed to accommodate changes in the server cluster (e.g., addition of nodes to and/or removal of nodes from the server cluster).
  • BMCP module 260 implements the Dynamic Host Configuration Protocol (DHCP), allowing BMonitor 250 (and thus node 248) to obtain an IP address from a DHCP server (e.g., cluster operations management console 240 of FIG. 3). During an initialization process for node 248, BMCP module 260 requests an IP address from the DHCP server, which in turn provides the IP address to module 260. Additional information regarding DHCP is available from Microsoft Corporation of Redmond, Wash.
  • Software engines 252 include any of a wide variety of conventional software components. Examples of engines 252 include an operating system (e.g., Windows NT®), a load balancing server component (e.g., to balance the processing load of multiple nodes 248), a caching server component (e.g., to cache data and/or instructions from another node 248 or received via the Internet), a storage manager component (e.g., to manage storage of data from another node 248 or received via the Internet), etc. In one implementation, each of the engines 252 is a protocol-based engine, communicating with BMonitor 250 and other engines 252 via messages and/or function calls without requiring the engines 252 and BMonitor 250 to be written using the same programming language.
  • Controller 254, in conjunction with loader 264, is responsible for controlling the execution of engines 252. This control can take different forms, including beginning or initiating execution of an engine 252, terminating execution of an engine 252, re-loading an image of an engine 252 from a storage device, debugging execution of an engine 252, etc. Controller 254 receives instructions from application operations management console 242 of FIG. 3 or a management device(s) 110 of FIG. 1 regarding which of these control actions to take and when to take them. In the event that execution of an engine 252 is to be initiated (including re-starting an engine whose execution was recently terminated), controller 254 communicates with loader 264 to load an image of the engine 252 from a storage device (e.g., device 262, ROM, etc.) into the memory (e.g., RAM) of node 248. Loader 264 operates in a conventional manner to copy the image of the engine from the storage device into memory and initialize any necessary operating system parameters to allow execution of the engine 252. Thus, the control of engines 252 is actually managed by a remote device, not locally at the same location as the node 248 being managed.
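  • A minimal sketch of this division of labor (Python, with invented names; for illustration only, and not a description of the actual BMonitor interfaces) might dispatch remotely received control actions to a loader that starts an engine image as follows.

    # Hypothetical sketch: a controller receives control actions from a remote management
    # device and asks a loader to start, stop, or re-load software engines. Each engine is
    # run as a separate process purely for illustration.
    import subprocess

    class Loader:
        def load_and_start(self, image_path):
            # Stand-in for copying the engine image into memory and beginning execution.
            return subprocess.Popen([image_path])

    class Controller:
        def __init__(self, loader):
            self.loader = loader
            self.engines = {}                        # engine name -> running process

        def handle_command(self, command, name, image_path=None):
            if command == "start":                   # includes re-starting a terminated engine
                self.engines[name] = self.loader.load_and_start(image_path)
            elif command == "stop":
                self.engines.pop(name).terminate()
            elif command == "reload":                # terminate, then re-load the image
                self.handle_command("stop", name)
                self.handle_command("start", name, image_path)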
  • Controller 254 also provides an interface via which application operations management console 242 of FIG. 3 or a management device(s) 110 of FIG. 1 can identify filters to add (and/or remove) from filter set 258.
  • Controller 254 also includes an interface via which cluster operations management console 240 of FIG. 3 can communicate commands to controller 254. Different types of hardware operation oriented commands can be communicated to controller 254 by cluster operations management console 240, such as re-booting the node, shutting down the node, placing the node in a low-power state (e.g., in a suspend or standby state), changing cluster boundaries, changing encryption keys (if any), etc.
  • Controller 254 further optionally provides encryption support for BMonitor 250, allowing data to be stored securely on mass storage device 262 (e.g., a magnetic disk, an optical disk, etc.) and secure communications to occur between node 248 and an operations management console (e.g., console 240 or 242 of FIG. 3) or other management device (e.g., management device 110 of FIG. 1).
  • Controller 254 maintains multiple encryption keys 259, which can include a variety of different keys such as symmetric keys (secret keys used in secret key cryptography), public/private key pairs (for public key cryptography), etc. to be used in encrypting and/or decrypting data.
  • BMonitor 250 makes use of public key cryptography to provide secure communications between node 248 and the management consoles (e.g., consoles 240 or 242 of FIG. 3) or other management devices (e.g., management device(s) 110 of FIG. 1). Public key cryptography is based on a key pair, including both a public key and a private key, and an encryption algorithm. The encryption algorithm can encrypt data based on the public key such that it cannot be decrypted efficiently without the private key. Thus, communications from the public-key holder can be encrypted using the public key, allowing only the private-key holder to decrypt the communications. Any of a variety of public key cryptography techniques may be used, such as the well-known RSA (Rivest, Shamir, and Adleman) encryption technique. For a basic introduction to cryptography, the reader is directed to a text written by Bruce Schneier and entitled "Applied Cryptography: Protocols, Algorithms, and Source Code in C," published by John Wiley & Sons with copyright 1994 (or second edition with copyright 1996).
  • BMonitor 250 is initialized to include a public/private key pair for both the landlord and the tenant. These key pairs can be generated by BMonitor 250, or alternatively by some other component and stored within BMonitor 250 (with that other component being trusted to destroy its knowledge of the key pair). As used herein, U refers to a public key and R refers to a private key. The public/private key pair for the landlord is referred to as (UL, RL), and the public/private key pair for the tenant is referred to as (UT, RT). BMonitor 250 makes the public keys UL and UT available to the landlord, but keeps the private keys RL and RT secret. In the illustrated example, BMonitor 250 never divulges the private keys RL and RT, so both the landlord and the tenant can be assured that no entity other than the BMonitor 250 can decrypt information that they encrypt using their public keys (e.g., via cluster operations management console 240 and application operations management console 242 of FIG. 3, respectively).
  • Once the landlord has the public keys UL and UT, the landlord can assign node 248 to a particular tenant, giving that tenant the public key UT. Use of the public key UT allows the tenant to encrypt communications to BMonitor 250 that only BMonitor 250 can decrypt (using the private key RT). Although not required, a prudent initial step for the tenant is to request that BMonitor 250 generate a new public/private key pair (UT, RT). In response to such a request, controller 254 or a dedicated key generator (not shown) of BMonitor 250 generates a new public/private key pair in any of a variety of well-known manners, stores the new key pair as the tenant key pair, and returns the new public key UT to the tenant. By generating a new key pair, the tenant is assured that no other entity, including the landlord, is aware of the tenant public key UT. Additionally, the tenant may also have new key pairs generated at subsequent times.
  • Having a public/private key pair in which BMonitor 250 stores the private key and the tenant knows the public key allows information to be securely communicated from the tenant to BMonitor 250. In order to ensure that information can be securely communicated from BMonitor 250 to the tenant, an additional public/private key pair is generated by the tenant and the public key portion is communicated to BMonitor 250. Any communications from BMonitor 250 to the tenant can thus be encrypted using this public key portion, and can be decrypted only by the holder of the corresponding private key (that is, only by the tenant).
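  • The following sketch (Python, using the third-party "cryptography" package; the class and method names are invented for illustration and do not describe the actual BMonitor implementation) shows how a component holding the pairs (UL, RL) and (UT, RT) could expose only the public keys, regenerate the tenant pair on request, and decrypt messages that a tenant encrypts with UT.

    # Hypothetical sketch of the landlord/tenant key pairs held by a BMonitor-like
    # component. The private keys RL and RT never leave the key store.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    class KeyStore:
        def __init__(self):
            self._landlord = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # (UL, RL)
            self._tenant = rsa.generate_private_key(public_exponent=65537, key_size=2048)    # (UT, RT)

        def public_keys(self):
            return self._landlord.public_key(), self._tenant.public_key()    # UL, UT

        def regenerate_tenant_pair(self):
            # Generating a fresh (UT, RT) assures the tenant that no other entity,
            # including the landlord, knows the new tenant key pair.
            self._tenant = rsa.generate_private_key(public_exponent=65537, key_size=2048)
            return self._tenant.public_key()

        def decrypt_from_tenant(self, ciphertext):
            return self._tenant.decrypt(ciphertext, OAEP)

    # Illustrative usage: a tenant encrypts a command with UT; only the holder of RT reads it.
    store = KeyStore()
    _, UT = store.public_keys()
    assert store.decrypt_from_tenant(UT.encrypt(b"re-key request", OAEP)) == b"re-key request"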
  • BMonitor 250 also maintains, as one of keys 259, a disk key which is generated based on one or more symmetric keys (symmetric keys refer to secret keys used in secret key cryptography). The disk key, also a symmetric key, is used by BMonitor 250 to store information in mass storage device 262. BMonitor 250 keeps the disk key secure, using it only to encrypt data that node 248 stores on mass storage device 262 and to decrypt data that node 248 retrieves from mass storage device 262 (thus there is no need for any other entities, including any management device, to have knowledge of the disk key).
  • Use of the disk key ensures that data stored on mass storage device 262 can only be decrypted by the node that encrypted it, and not any other node or device. Thus, for example, if mass storage device 262 were to be removed and attempts made to read the data on device 262, such attempts would be unsuccessful. BMonitor 250 uses the disk key to encrypt data to be stored on mass storage device 262 regardless of the source of the data. For example, the data may come from a client device (e.g., client 102 of FIG. 1) used by a customer of the tenant, from a management device (e.g., a device 110 of FIG. 1 or a console 240 or 242 of FIG. 3), etc.
  • In one implementation, the disk key is generated by combining the storage keys corresponding to each management device. The storage keys can be combined in a variety of different manners, and in one implementation are combined by using one of the keys to encrypt the other key, with the resultant value being encrypted by another one of the keys, etc.
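  • A rough sketch of such a combination (Python, using the third-party "cryptography" package's Fernet scheme as a stand-in for the symmetric cipher; the hashing step and all names are additions made only to keep the example runnable) chains the storage keys so that each key encrypts the result produced by the previous one, and then uses the combined value as the disk key for data written to the mass storage device.

    # Hypothetical sketch: the disk key is derived by chaining the per-domain storage
    # keys (each encrypting the previous result) and is then used to encrypt whatever
    # is written to the mass storage device. Fernet encryption is randomized, so the
    # derivation here is not repeatable; the key is derived once and kept in memory.
    import base64, hashlib
    from cryptography.fernet import Fernet

    def derive_disk_key(storage_keys):
        value = b"disk-key-seed"
        for key in storage_keys:                      # one key encrypts the running value, etc.
            value = Fernet(key).encrypt(value)
        digest = hashlib.sha256(value).digest()       # reduce to the fixed key size Fernet expects
        return base64.urlsafe_b64encode(digest)

    storage_keys = [Fernet.generate_key(), Fernet.generate_key()]   # e.g., one per management device
    disk = Fernet(derive_disk_key(storage_keys))

    stored = disk.encrypt(b"data written to mass storage device 262")
    assert disk.decrypt(stored) == b"data written to mass storage device 262"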
  • Additionally, BMonitor 250 operates as a trusted third party mediating interaction among multiple mutually distrustful management agents that share responsibility for managing node 248. For example, the landlord and tenant for node 248 do not typically fully trust one another. BMonitor 250 thus operates as a trusted third party, allowing the lessor and lessee of node 248 to trust that information made available to BMonitor 250 by a particular entity or agent is accessible only to that entity or agent, and no other (e.g., confidential information given by the lessor is not accessible to the lessee, and vice versa). BMonitor 250 uses a set of layered ownership domains (ODs) to assist in creating this trust. An ownership domain is the basic unit of authentication and rights in BMonitor 250, and each managing entity or agent (e.g., the lessor and the lessee) corresponds to a separate ownership domain (although each managing entity may have multiple management devices from which it can exercise its managerial responsibilities).
  • FIG. 6 is a block diagram illustrating an exemplary set of ownership domains in accordance with certain embodiments of the invention. Multiple (x) ownership domains 280, 282, and 284 are organized as an ownership domain stack 286. Each ownership domain 280-284 corresponds to a particular managerial level and one or more management devices (e.g., device(s) 110 of FIG. 1, consoles 240 and 242 of FIG. 3, etc.). The base or root ownership domain 280 corresponds to the actual owner of the node, such as the landlord discussed above. The next lowest ownership domain 282 corresponds to the entity that the owner of the hardware leases the hardware to (e.g., the tenant discussed above). A management device in a particular ownership domain can set up another ownership domain for another management device that is higher on ownership domain stack 286. For example, the entity that the node is leased to can set up another ownership domain for another entity (e.g., to set up a cluster of nodes implementing a database cluster).
  • When a new ownership domain is created, it is pushed on top of ownership domain stack 286. It remains the top-level ownership domain until either it creates another new ownership domain or its rights are revoked. An ownership domain's rights can be revoked by a device in any lower-level ownership domain on ownership domain stack 286, at which point the ownership domain is popped from (removed from) stack 286 along with any other higher-level ownership domains. For example, if the owner of node 248 (ownership domain 280) were to revoke the rights of ownership domain 282, then ownership domains 282 and 284 would be popped from ownership domain stack 286.
  • Each ownership domain has a corresponding set of rights. In the illustrated example, the top-level ownership domain has one set of rights that include: (1) the right to push new ownership domains on the ownership domain stack; (2) the right to access any system memory in the node; (3) the right to access any mass storage devices in or coupled to the node; (4) the right to modify (add, remove, or change) packet filters at the node; (5) the right to start execution of software engines on the node (e.g., engines 252 of FIG. 5); (6) the right to stop execution of software engines on the node, including resetting the node; (7) the right to debug software engines on the node; (8) the right to change its own authentication credentials (e.g., its public key or ID); (9) the right to modify its own storage key; (10) the right to subscribe to engine events, machine events, and/or packet filter events (e.g., to notify a management console or other device when one of these events occurs). Additionally, each of the lower-level ownership domains has another set of rights that include: (1) the right to pop an existing ownership domain(s); (2) the right to modify (add, remove, or change) packet filters at the node; (3) the right to change its own authentication credentials (e.g., public key or ID); and (4) the right to subscribe to machine events and/or packet filter events. Alternatively, some of these rights may not be included (e.g., depending on the situation, the right to debug software engines on the node may not be needed), or other rights may be included (e.g., the top-level ownership domain may include the right to pop itself off the ownership domain stack).
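  • The layering described above can be pictured with the following sketch (Python; the data structure, the particular right names, and the field names are invented for illustration and follow the text only loosely): only the top-level ownership domain carries the expanded rights, and revoking a domain pops it together with every domain above it, discarding the popped domains' storage keys.

    # Hypothetical sketch of an ownership domain (OD) stack with per-level rights.
    TOP_RIGHTS = {"push_od", "access_memory", "access_storage", "modify_filters",
                  "start_engine", "stop_engine", "debug_engine",
                  "change_credentials", "modify_storage_key", "subscribe_events"}
    LOWER_RIGHTS = {"pop_od", "modify_filters", "change_credentials", "subscribe_events"}

    class OwnershipDomain:
        def __init__(self, od_id, public_key, storage_key=None):
            self.od_id, self.public_key, self.storage_key = od_id, public_key, storage_key

    class ODStack:
        def __init__(self, root):
            self.stack = [root]                          # root OD: the node's actual owner

        def push(self, od):
            self.stack.append(od)                        # the new OD becomes the top level

        def rights_of(self, od_id):
            index = [d.od_id for d in self.stack].index(od_id)
            return TOP_RIGHTS if index == len(self.stack) - 1 else LOWER_RIGHTS

        def revoke(self, od_id):
            # Pop the named OD and every higher-level OD; erase each popped storage key
            # so the data those domains stored stays encrypted and unreadable.
            index = [d.od_id for d in self.stack].index(od_id)
            for od in self.stack[index:]:
                od.storage_key = None
            del self.stack[index:]

    stack = ODStack(OwnershipDomain("landlord", public_key=b"UL"))
    stack.push(OwnershipDomain("tenant", public_key=b"UT", storage_key=b"tenant-storage-key"))
    assert "start_engine" in stack.rights_of("tenant")        # the top level has the expanded rights
    assert "start_engine" not in stack.rights_of("landlord")  # lower levels keep the reduced set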
  • Ownership domains can be added to and removed from ownership domain stack 286 numerous times during operation. Which ownership domains are removed and/or added varies based on the activities being performed. By way of example, if the owner of node 248 (corresponding to root ownership domain 280) desires to perform some operation on node 248, all higher-level ownership domains 282-284 are revoked, the desired operation is performed (ownership domain 280 is now the top-level domain, so the expanded set of rights are available), and then new ownership domains can be created and added to ownership domain stack 286 (e.g., so that the management agent previously corresponding to the top-level ownership domain is returned to its previous position).
  • BMonitor 250 checks, for each request received from an entity corresponding to one of the ownership domains (e.g., a management console controlled by the entity), what rights the ownership domain has. If the ownership domain has the requisite rights for the request to be implemented, then BMonitor 250 carries out the request. However, if the ownership domain does not have the requisite set of rights, then the request is not carried out (e.g., an indication that the request cannot be carried out can be returned to the requester, or alternatively the request can simply be ignored).
  • In the illustrated example, each ownership domain includes an identifier (ID), a public key, and a storage key. The identifier serves as a unique identifier of the ownership domain, the public key is used to send secure communications to a management device corresponding to the ownership domain, and the storage key is used (at least in part) to encrypt information stored on mass storage devices. An additional private key may also be included for each ownership domain for the management device corresponding to the ownership domain to send secure communications to the BMonitor. When the root ownership domain 280 is created, it is initialized (e.g., by BMonitor 250) with its ID and public key. The root ownership domain 280 may also be initialized to include the storage key (and a private key), or alternatively it may be added later (e.g., generated by BMonitor 250, communicated to BMonitor 250 from a management console, etc.). Similarly, each time a new ownership domain is created, the ownership domain that creates the new ownership domain communicates an ID and public key to BMonitor 250 for the new ownership domain. A storage key (and a private key) may also be created for the new ownership domain when the new ownership domain is created, or alternatively at a later time.
  • BMonitor 250 authenticates a management device(s) corresponding to each of the ownership domains. BMonitor does not accept any commands from a management device until it is authenticated, and only reveals confidential information (e.g., encryption keys) for a particular ownership domain to a management device(s) that can authenticate itself as corresponding to that ownership domain. This authentication process can occur multiple times during operation of the node, allowing the management devices for one or more ownership domains to change over time. The authentication of management devices can occur in a variety of different manners. In one implementation, when a management device requests a connection to BMonitor 250 and asserts that it corresponds to a particular ownership domain, BMonitor 250 generates a token (e.g., a random number), encrypts the token with the public key of the ownership domain, and then sends the encrypted token to the requesting management device. Upon receipt of the encrypted token, the management device decrypts the token using its private key, and then returns the decrypted token to BMonitor 250. If the returned token matches the token that BMonitor 250 generated, then the authenticity of the management device is verified (because only the management device with the corresponding private key would be able to decrypt the token). An analogous process can be used for BMonitor 250 to authenticate itself to the management device.
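  • One possible shape of that token exchange is sketched below (Python, again using the third-party "cryptography" package; the function names are invented). The monitoring side encrypts a random token with the ownership domain's public key, and only a management device holding the matching private key can return the token in the clear.

    # Hypothetical sketch of the challenge/response authentication described above.
    import os
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    def issue_challenge(domain_public_key):
        token = os.urandom(16)                        # e.g., a random number
        return token, domain_public_key.encrypt(token, OAEP)

    def answer_challenge(device_private_key, encrypted_token):
        return device_private_key.decrypt(encrypted_token, OAEP)

    # The management device proves it corresponds to the claimed ownership domain.
    device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    expected, challenge = issue_challenge(device_key.public_key())
    assert answer_challenge(device_key, challenge) == expected    # authenticity verified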
  • Once authenticated, the management device can communicate requests to BMonitor 250 and have any of those requests carried out (assuming it has the rights to do so). Although not required, it is typically prudent for a management console, upon initially authenticating itself to BMonitor 250, to change its public key/private key pair.
  • When a new ownership domain is created, the management device that is creating the new ownership domain can optionally terminate any executing engines 252 and erase any system memory and mass storage devices. This provides an added level of security, on top of the encryption, to ensure that one management device does not have access to information stored on the hardware by another management device. Additionally, each time an ownership domain is popped from the stack, BMonitor 250 terminates any executing engines 252, erases the system memory, and also erases the storage key for that ownership domain. Thus, any information stored by that ownership domain cannot be accessed by the remaining ownership domains: the memory has been erased so there is no data in memory, and without the storage key information on the mass storage device cannot be decrypted. BMonitor 250 may alternatively erase the mass storage device too. However, by simply erasing the key and leaving the data encrypted, BMonitor 250 allows the data to be recovered if the popped ownership domain is re-created (and uses the same storage key).
  • FIG. 7 is a flow diagram illustrating the general operation of BMonitor 250 in accordance with certain embodiments of the invention. Initially, BMonitor 250 monitors the inputs it receives (block 290). These inputs can be from a variety of different sources, such as another node 248, a client computer via network connection 216 (FIG. 3), cluster operations management console 240, application operations management console 242, an engine 252, a management device 110 (FIG. 1), etc.
  • If the received request is a control request (e.g., from one of consoles 240 or 242 of FIG. 3, or a management device(s) 110 of FIG. 1), then a check is made (based on the top-level ownership domain) as to whether the requesting device has the necessary rights for the request (block 292). If the requesting device does not have the necessary rights, then BMonitor 250 returns to monitoring inputs (block 290) without implementing the request. However, if the requesting device has the necessary rights, then the request is implemented (block 294), and BMonitor 250 continues to monitor the inputs it receives (block 290). Alternatively, if the received request is a data request (e.g., inbound from another node 248 or a client computer via network connection 216, outbound from an engine 252, etc.), then BMonitor 250 either accepts or rejects the request (act 296), and continues to monitor the inputs it receives (block 290). Whether BMonitor 250 accepts a request is dependent on the filters 258 (FIG. 5), as discussed above.
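  • Put informally, the dispatch of FIG. 7 resembles the following loop (Python; the request representation and helper names are invented for illustration): control requests are gated by the rights of the requesting ownership domain, and data requests are gated by the filters.

    # Hypothetical sketch of the FIG. 7 dispatch loop.
    def run_bmonitor(inputs, rights_of, filters_accept):
        for request in inputs:                                 # block 290: monitor inputs
            if request["kind"] == "control":
                if request["action"] in rights_of(request["domain"]):   # block 292: rights check
                    request["implement"]()                     # block 294: implement the request
                # otherwise the request is not implemented
            else:                                              # a data request
                if filters_accept(request):                    # act 296: accept or reject
                    request["forward"]()
                # otherwise the data is rejected

    # Illustrative wiring: one control request that the "tenant" domain is allowed to make.
    run_bmonitor(
        inputs=[{"kind": "control", "domain": "tenant", "action": "start_engine",
                 "implement": lambda: print("engine started")}],
        rights_of=lambda domain: {"start_engine"} if domain == "tenant" else set(),
        filters_accept=lambda request: False)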
  • FIG. 8 is a flowchart illustrating an exemplary process for handling outbound data requests in accordance with certain embodiments of the invention. The process of FIG. 8 is implemented by BMonitor 250 of FIG. 5, and may be performed in software. The process of FIG. 8 is discussed with additional reference to components in FIGS. 1, 3 and 5.
  • Initially, the outbound data request is received (act 300). Controller 254 compares the request to outbound request restrictions (act 302). This comparison is accomplished by comparing information corresponding to the data (e.g., information in a header of a packet that includes the data, or information inherent in the data, such as the manner (e.g., which of multiple function calls is used) in which the data request was provided to BMonitor 250) to the outbound request restrictions maintained by filters 258. This comparison allows BMonitor 250 to determine whether it is permissible to pass the outbound data request to the target (act 304). For example, if filters 258 indicate which targets data cannot be sent to, then it is permissible to pass the outbound data request to the target only if the target identifier is not identified in filters 258.
  • If it is permissible to pass the outbound request to the target, then BMonitor 250 sends the request to the target (act 306). For example, BMonitor 250 can transmit the request to the appropriate target via transport medium 211 (and possibly network connection 216), or via another connection to network 108. However, if it is not permissible to pass the outbound request to the target, then BMonitor 250 rejects the request (act 308). BMonitor 250 may optionally transmit an indication to the source of the request that it was rejected, or alternatively may simply drop the request.
  • FIG. 9 is a flowchart illustrating an exemplary process for handling inbound data requests in accordance with certain embodiments of the invention. The process of FIG. 9 is implemented by BMonitor 250 of FIG. 5, and may be performed in software. The process of FIG. 9 is discussed with additional reference to components in FIG. 5.
  • Initially, the inbound data request is received (act 310). Controller 254 compares the request to inbound request restrictions (act 312). This comparison is accomplished by comparing information corresponding to the data to the inbound request restrictions maintained by filters 258. This comparison allows BMonitor 250 to determine whether it is permissible for any of software engines 252 to receive the data request (act 314). For example, if filters 258 indicate which sources data can be received from, then it is permissible for an engine 252 to receive the data request only if the source of the data is identified in filters 258.
  • If it is permissible to receive the inbound data request, then BMonitor 250 forwards the request to the targeted engine(s) 252 (act 316). However, if it is not permissible to receive the inbound data request from the source, then BMonitor 250 rejects the request (act 318). BMonitor 250 may optionally transmit an indication to the source of the request that it was rejected, or alternatively may simply drop the request.
  • Conclusion
  • Although the description above uses language that is specific to structural features and/or methodological acts, it is to be understood that the invention defined in the appended claims is not limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the invention.

Claims (18)

1. One or more computer-readable media having stored thereon a computer program that, when executed by one or more processors of a node in a co-location facility, causes the one or more processors to perform acts including:
beginning and terminating execution of components on the node in response to received commands; and
restricting which other nodes in the co-location facility components that are executing on the node can receive data from and send data to.
2. One or more computer-readable media as recited in claim 1, wherein a plurality of management devices share management responsibility for the node, and wherein beginning and terminating execution of components on the node is restricted to only one of the plurality of management devices at a time.
3. One or more computer-readable media as recited in claim 1, wherein the restricting comprises:
checking whether it is permissible to forward received data to its intended target; and
forwarding the received data to its intended target only if it is permissible to do so.
4. One or more computer-readable media as recited in claim 3, wherein the intended target comprises another node in the co-location facility.
5. One or more computer-readable media as recited in claim 3, wherein the intended target comprises at least one of the components executing on the node.
6. One or more computer-readable media as recited in claim 1, wherein the beginning and terminating execution of components comprises beginning and terminating execution of the components based on commands received from an operations console at a location remote from the co-location facility.
7. One or more computer-readable media as recited in claim 1, wherein one of the components comprises an operating system.
8. A method comprising:
receiving, at a node in a co-location facility, a first request from a first control console that is local to the co-location facility;
implementing the first request;
receiving, at the node, a second request from a second control console that is remote from the co-location facility; and
implementing the second request.
9. A method as recited in claim 8, wherein the first request comprises hardware operation oriented commands.
10. A method as recited in claim 8, wherein the second request comprises software application control oriented commands.
11. A method as recited in claim 8, wherein the first request corresponds to one of a first set of rights that are granted to the first control console, wherein the second request corresponds to one of a second set of rights that are granted to the second control console, and wherein the first set of rights is more restricted than the second set of rights.
12. One or more computer-readable memories containing a computer program that is executable by a processor to perform the method recited in claim 8.
13. One or more computer-readable media having stored thereon a computer program that, when executed by one or more processors of a node in a facility, causes the one or more processors to perform acts including:
establishing a boundary of a server cluster in the facility, wherein the server cluster includes the node; and
altering the boundary of the server cluster based on commands received from a console outside the server cluster.
14. One or more computer-readable media as recited in claim 13, wherein the establishing comprises including a filter that restricts access to another node that is in the facility but that is not in the server cluster.
15. One or more computer-readable media as recited in claim 13, wherein the establishing comprises generating a plurality of filters identifying only other nodes in the server cluster as being permissible to access.
16. One or more computer-readable media as recited in claim 13, wherein the computer program, when executed, further causes the one or more processors to perform acts including executing a software engine in response to a command received from the console.
17. One or more computer-readable media as recited in claim 13, wherein the computer program, when executed, further causes the one or more processors to perform acts including terminating execution of a software engine in response to a command received from the console.
18. One or more computer-readable media as recited in claim 13, wherein the facility comprises a co-location facility.
US11/112,412 2000-10-24 2005-04-22 System and method for restricting data transfers and managing software components of distributed computers Abandoned US20050192971A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/112,412 US20050192971A1 (en) 2000-10-24 2005-04-22 System and method for restricting data transfers and managing software components of distributed computers

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/695,820 US6886038B1 (en) 2000-10-24 2000-10-24 System and method for restricting data transfers and managing software components of distributed computers
US11/112,412 US20050192971A1 (en) 2000-10-24 2005-04-22 System and method for restricting data transfers and managing software components of distributed computers

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/695,820 Continuation US6886038B1 (en) 2000-10-24 2000-10-24 System and method for restricting data transfers and managing software components of distributed computers

Publications (1)

Publication Number Publication Date
US20050192971A1 true US20050192971A1 (en) 2005-09-01

Family

ID=24794589

Family Applications (6)

Application Number Title Priority Date Filing Date
US09/695,820 Expired - Lifetime US6886038B1 (en) 2000-10-24 2000-10-24 System and method for restricting data transfers and managing software components of distributed computers
US11/007,001 Abandoned US20050102388A1 (en) 2000-10-24 2004-12-08 System and method for restricting data transfers and managing software components of distributed computers
US11/007,141 Expired - Fee Related US7016950B2 (en) 2000-10-24 2004-12-08 System and method for restricting data transfers and managing software components of distributed computers
US11/007,828 Expired - Fee Related US7043545B2 (en) 2000-10-24 2004-12-08 System and method for restricting data transfers and managing software components of distributed computers
US11/112,412 Abandoned US20050192971A1 (en) 2000-10-24 2005-04-22 System and method for restricting data transfers and managing software components of distributed computers
US12/839,328 Abandoned US20100287271A1 (en) 2000-10-24 2010-07-19 System and Method for Restricting Data Transfers and Managing Software Components of Distributed Computers

Family Applications Before (4)

Application Number Title Priority Date Filing Date
US09/695,820 Expired - Lifetime US6886038B1 (en) 2000-10-24 2000-10-24 System and method for restricting data transfers and managing software components of distributed computers
US11/007,001 Abandoned US20050102388A1 (en) 2000-10-24 2004-12-08 System and method for restricting data transfers and managing software components of distributed computers
US11/007,141 Expired - Fee Related US7016950B2 (en) 2000-10-24 2004-12-08 System and method for restricting data transfers and managing software components of distributed computers
US11/007,828 Expired - Fee Related US7043545B2 (en) 2000-10-24 2004-12-08 System and method for restricting data transfers and managing software components of distributed computers

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/839,328 Abandoned US20100287271A1 (en) 2000-10-24 2010-07-19 System and Method for Restricting Data Transfers and Managing Software Components of Distributed Computers

Country Status (3)

Country Link
US (6) US6886038B1 (en)
EP (2) EP1202526A3 (en)
JP (3) JP4188584B2 (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040225922A1 (en) * 2003-05-09 2004-11-11 Sun Microsystems, Inc. System and method for request routing
US20050022202A1 (en) * 2003-07-09 2005-01-27 Sun Microsystems, Inc. Request failover mechanism for a load balancing system
US20070115981A1 (en) * 2005-10-14 2007-05-24 Dell Products L.P. System and method for filtering communications at a network interface controller
US20070294596A1 (en) * 2006-05-22 2007-12-20 Gissel Thomas R Inter-tier failure detection using central aggregation point
US20090058098A1 (en) * 2007-08-13 2009-03-05 Michael Patrick Flynn Backup generators
US7669235B2 (en) 2004-04-30 2010-02-23 Microsoft Corporation Secure domain join for computing devices
US7684964B2 (en) 2003-03-06 2010-03-23 Microsoft Corporation Model and system state synchronization
US7689676B2 (en) 2003-03-06 2010-03-30 Microsoft Corporation Model-based policy application
US7711121B2 (en) 2000-10-24 2010-05-04 Microsoft Corporation System and method for distributed management of shared computers
US7778422B2 (en) 2004-02-27 2010-08-17 Microsoft Corporation Security associations for devices
US7792931B2 (en) 2003-03-06 2010-09-07 Microsoft Corporation Model-based system provisioning
US7797147B2 (en) 2005-04-15 2010-09-14 Microsoft Corporation Model-based system monitoring
US7941309B2 (en) 2005-11-02 2011-05-10 Microsoft Corporation Modeling IT operations/policies
US8489728B2 (en) 2005-04-15 2013-07-16 Microsoft Corporation Model-based system monitoring
US8549513B2 (en) 2005-06-29 2013-10-01 Microsoft Corporation Model-based virtual system provisioning
US10122799B2 (en) 2016-03-29 2018-11-06 Experian Health, Inc. Remote system monitor
US11223680B2 (en) * 2014-12-16 2022-01-11 Telefonaktiebolaget Lm Ericsson (Publ) Computer servers for datacenter management

Families Citing this family (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7117239B1 (en) 2000-07-28 2006-10-03 Axeda Corporation Reporting the state of an apparatus to a remote computer
US8108543B2 (en) 2000-09-22 2012-01-31 Axeda Corporation Retrieving data from a server
US7185014B1 (en) 2000-09-22 2007-02-27 Axeda Corporation Retrieving data from a server
US6907395B1 (en) * 2000-10-24 2005-06-14 Microsoft Corporation System and method for designing a logical model of a distributed computer system and deploying physical resources according to the logical model
JP3859450B2 (en) * 2001-02-07 2006-12-20 富士通株式会社 Secret information management system and information terminal
US7171554B2 (en) * 2001-08-13 2007-01-30 Hewlett-Packard Company Method, computer program product and system for providing a switch user functionality in an information technological network
US7254601B2 (en) 2001-12-20 2007-08-07 Questra Corporation Method and apparatus for managing intelligent assets in a distributed environment
US7178149B2 (en) 2002-04-17 2007-02-13 Axeda Corporation XML scripting of soap commands
US20040093595A1 (en) * 2002-08-08 2004-05-13 Eric Bilange Software application framework for network-connected devices
US8544084B2 (en) 2002-08-19 2013-09-24 Blackberry Limited System and method for secure control of resources of wireless mobile communication devices
CA2411424A1 (en) * 2002-11-08 2004-05-08 Bell Canada Method and system for effective switching between set-top box services
US7356568B2 (en) 2002-12-12 2008-04-08 International Business Machines Corporation Method, processing unit and data processing system for microprocessor communication in a multi-processor system
US7359932B2 (en) 2002-12-12 2008-04-15 International Business Machines Corporation Method and data processing system for microprocessor communication in a cluster-based multi-processor system
US7360067B2 (en) 2002-12-12 2008-04-15 International Business Machines Corporation Method and data processing system for microprocessor communication in a cluster-based multi-processor wireless network
US7493417B2 (en) 2002-12-12 2009-02-17 International Business Machines Corporation Method and data processing system for microprocessor communication using a processor interconnect in a multi-processor system
JP4274311B2 (en) * 2002-12-25 2009-06-03 富士通株式会社 IDENTIFICATION INFORMATION CREATION METHOD, INFORMATION PROCESSING DEVICE, AND COMPUTER PROGRAM
US7966418B2 (en) 2003-02-21 2011-06-21 Axeda Corporation Establishing a virtual tunnel between two computer programs
US7072807B2 (en) * 2003-03-06 2006-07-04 Microsoft Corporation Architecture for distributed computing system and automated design, deployment, and management of distributed applications
US7469417B2 (en) * 2003-06-17 2008-12-23 Electronic Data Systems Corporation Infrastructure method and system for authenticated dynamic security domain boundary extension
US7606929B2 (en) * 2003-06-30 2009-10-20 Microsoft Corporation Network load balancing with connection manipulation
US7636917B2 (en) * 2003-06-30 2009-12-22 Microsoft Corporation Network load balancing with host status information
US7590736B2 (en) * 2003-06-30 2009-09-15 Microsoft Corporation Flexible network load balancing
US7400878B2 (en) 2004-02-26 2008-07-15 Research In Motion Limited Computing device with environment aware features
ATE500698T1 (en) 2004-04-30 2011-03-15 Research In Motion Ltd SYSTEM AND METHOD FOR FILTERING DATA TRANSFERS IN A MOBILE DEVICE
US8010644B1 (en) * 2005-02-23 2011-08-30 Sprint Communications Company L.P. Method and system for deploying a network monitoring service within a communication network
US7614082B2 (en) 2005-06-29 2009-11-03 Research In Motion Limited System and method for privilege management and revocation
US20070016393A1 (en) * 2005-06-29 2007-01-18 Microsoft Corporation Model-based propagation of attributes
US7760695B2 (en) * 2006-09-29 2010-07-20 Symbol Technologies, Inc. Methods and systems for centralized cluster management in wireless switch architecture
US8370479B2 (en) 2006-10-03 2013-02-05 Axeda Acquisition Corporation System and method for dynamically grouping devices based on present device conditions
US8433730B2 (en) 2006-10-31 2013-04-30 Ariba, Inc. Dynamic data access and storage
US8065397B2 (en) 2006-12-26 2011-11-22 Axeda Acquisition Corporation Managing configurations of distributed devices
US8478861B2 (en) 2007-07-06 2013-07-02 Axeda Acquisition Corp. Managing distributed devices with limited connectivity
JP2009086802A (en) * 2007-09-28 2009-04-23 Hitachi Ltd Mediation method and system for authentication
JP5176482B2 (en) * 2007-10-26 2013-04-03 富士通株式会社 Management program, management method, management apparatus, and communication system
CA2811839C (en) 2010-09-24 2017-09-05 Research In Motion Limited Method and apparatus for differentiated access control
US9147085B2 (en) 2010-09-24 2015-09-29 Blackberry Limited Method for establishing a plurality of modes of operation on a mobile device
CN103229183B (en) 2010-09-24 2016-05-11 黑莓有限公司 Be used for the method and apparatus of the access control of differentiation
US9225727B2 (en) 2010-11-15 2015-12-29 Blackberry Limited Data source based application sandboxing
US8774143B2 (en) * 2010-11-18 2014-07-08 General Electric Company System and method of communication using a smart meter
US8862871B2 (en) * 2011-04-15 2014-10-14 Architecture Technology, Inc. Network with protocol, privacy preserving source attribution and admission control and method
US9342254B2 (en) * 2011-06-04 2016-05-17 Microsoft Technology Licensing, Llc Sector-based write filtering with selective file and registry exclusions
US20130039266A1 (en) 2011-08-08 2013-02-14 Research In Motion Limited System and method to increase link adaptation performance with multi-level feedback
KR101326896B1 (en) * 2011-08-24 2013-11-11 주식회사 팬택 Terminal and method for providing risk of applications using the same
US9161226B2 (en) 2011-10-17 2015-10-13 Blackberry Limited Associating services to perimeters
US9497220B2 (en) 2011-10-17 2016-11-15 Blackberry Limited Dynamically generating perimeters
US9613219B2 (en) 2011-11-10 2017-04-04 Blackberry Limited Managing cross perimeter access
US8799227B2 (en) 2011-11-11 2014-08-05 Blackberry Limited Presenting metadata from multiple perimeters
US9262604B2 (en) 2012-02-01 2016-02-16 Blackberry Limited Method and system for locking an electronic device
US9698975B2 (en) 2012-02-15 2017-07-04 Blackberry Limited Key management on device for perimeters
EP2629478B1 (en) 2012-02-16 2018-05-16 BlackBerry Limited Method and apparatus for separation of connection data by perimeter type
CA2805960C (en) 2012-02-16 2016-07-26 Research In Motion Limited Method and apparatus for management of multiple grouped resources on device
EP2629570B1 (en) 2012-02-16 2015-11-25 BlackBerry Limited Method and apparatus for automatic vpn login and interface selection
CA2800504C (en) 2012-02-17 2019-09-10 Research In Motion Limited Designation of classes for certificates and keys
CA2799903C (en) 2012-02-17 2017-10-24 Research In Motion Limited Certificate management method based on connectivity and policy
US8561142B1 (en) * 2012-06-01 2013-10-15 Symantec Corporation Clustered device access control based on physical and temporal proximity to the user
US9369466B2 (en) 2012-06-21 2016-06-14 Blackberry Limited Managing use of network resources
US8972762B2 (en) 2012-07-11 2015-03-03 Blackberry Limited Computing devices and methods for resetting inactivity timers on computing devices
US8656016B1 (en) 2012-10-24 2014-02-18 Blackberry Limited Managing application execution and data access on a device
US9075955B2 (en) 2012-10-24 2015-07-07 Blackberry Limited Managing permission settings applied to applications
US9264413B2 (en) * 2012-12-06 2016-02-16 Qualcomm Incorporated Management of network devices utilizing an authorization token
WO2014117247A1 (en) 2013-01-29 2014-08-07 Blackberry Limited Managing application access to certificates and keys
US20140280698A1 (en) * 2013-03-13 2014-09-18 Qnx Software Systems Limited Processing a Link on a Device
US9787531B2 (en) * 2013-10-11 2017-10-10 International Business Machines Corporation Automatic notification of isolation
EP3668002B1 (en) * 2014-12-19 2022-09-14 Private Machines Inc. Systems and methods for using extended hardware security modules
US9961012B2 (en) * 2015-12-21 2018-05-01 Microsoft Technology Licensing, Llc Per-stage assignment of pipelines agents
US10587628B2 (en) * 2016-09-29 2020-03-10 Microsoft Technology Licensing, Llc Verifiable outsourced ledgers
WO2019148482A1 (en) * 2018-02-05 2019-08-08 Cisco Technology, Inc. Configurable storage server with multiple sockets
JP7020684B2 (en) * 2018-11-26 2022-02-16 株式会社ソフイア Pachinko machine

Citations (69)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5768271A (en) * 1996-04-12 1998-06-16 Alcatel Data Networks Inc. Virtual private network
US5822531A (en) * 1996-07-22 1998-10-13 International Business Machines Corporation Method and system for dynamically reconfiguring a cluster of computer systems
US5878220A (en) * 1994-11-21 1999-03-02 Oracle Corporation Method and apparatus for storing and transferring data on a network
US5895499A (en) * 1995-07-03 1999-04-20 Sun Microsystems, Inc. Cross-domain data transfer using deferred page remapping
US5930798A (en) * 1996-08-15 1999-07-27 Predicate Logic, Inc. Universal data measurement, analysis and control system
US5968126A (en) * 1997-04-02 1999-10-19 Switchsoft Systems, Inc. User-based binding of network stations to broadcast domains
US6035405A (en) * 1997-12-22 2000-03-07 Nortel Networks Corporation Secure virtual LANs
US6047325A (en) * 1997-10-24 2000-04-04 Jain; Lalit Network device for supporting construction of virtual local area networks on arbitrary local and wide area computer networks
US6075776A (en) * 1996-06-07 2000-06-13 Nippon Telegraph And Telephone Corporation VLAN control system and method
US6085238A (en) * 1996-04-23 2000-07-04 Matsushita Electric Works, Ltd. Virtual LAN system
US6108702A (en) * 1998-12-02 2000-08-22 Micromuse, Inc. Method and apparatus for determining accurate topology features of a network
US6147995A (en) * 1995-11-15 2000-11-14 Cabletron Systems, Inc. Method for establishing restricted broadcast groups in a switched network
US6151688A (en) * 1997-02-21 2000-11-21 Novell, Inc. Resource management in a clustered computer system
US6167052A (en) * 1998-04-27 2000-12-26 Vpnx.Com, Inc. Establishing connectivity in networks
US6178529B1 (en) * 1997-11-03 2001-01-23 Microsoft Corporation Method and system for resource monitoring of disparate resources in a server cluster
US6195355B1 (en) * 1997-09-26 2001-02-27 Sony Corporation Packet-Transmission control method and packet-transmission control apparatus
US6209099B1 (en) * 1996-12-18 2001-03-27 Ncr Corporation Secure data processing method and system
US6269079B1 (en) * 1997-11-12 2001-07-31 International Business Machines Corporation Systems, methods and computer program products for distributing connection information between ATM nodes
US6305015B1 (en) * 1997-07-02 2001-10-16 Bull S.A. Information processing system architecture
US6336171B1 (en) * 1998-12-23 2002-01-01 Ncr Corporation Resource protection in a cluster environment
US6370573B1 (en) * 1999-08-31 2002-04-09 Accenture Llp System, method and article of manufacture for managing an environment of a development architecture framework
US6393485B1 (en) * 1998-10-27 2002-05-21 International Business Machines Corporation Method and apparatus for managing clustered computer systems
US20020069369A1 (en) * 2000-07-05 2002-06-06 Tremain Geoffrey Donald Method and apparatus for providing computer services
US6449650B1 (en) * 1999-02-01 2002-09-10 Redback Networks Inc. Methods and apparatus for deploying quality of service policies on a data communication network
US6470025B1 (en) * 1998-06-05 2002-10-22 3Com Technologies System for providing fair access for VLANs to a shared transmission medium
US20030056063A1 (en) * 2001-09-17 2003-03-20 Hochmuth Roland M. System and method for providing secure access to network logical storage partitions
US6542504B1 (en) * 1999-05-28 2003-04-01 3Com Corporation Profile based method for packet header compression in a point to point link
US6546553B1 (en) * 1998-10-02 2003-04-08 Microsoft Corporation Service installation on a base function and provision of a pass function with a service-free base function semantic
US6549934B1 (en) * 1999-03-01 2003-04-15 Microsoft Corporation Method and system for remote access to computer devices via client managed server buffers exclusively allocated to the client
US6564261B1 (en) * 1999-05-10 2003-05-13 Telefonaktiebolaget Lm Ericsson (Publ) Distributed system to intelligently establish sessions between anonymous users over various networks
US6564252B1 (en) * 1999-03-11 2003-05-13 Microsoft Corporation Scalable storage system with unique client assignment to storage server partitions
US6570875B1 (en) * 1998-10-13 2003-05-27 Intel Corporation Automatic filtering and creation of virtual LANs among a plurality of switch ports
US6574195B2 (en) * 2000-04-19 2003-06-03 Caspian Networks, Inc. Micro-flow management
US6598173B1 (en) * 1997-05-13 2003-07-22 Micron Technology, Inc. Method of remote access and control of environmental conditions
US6606708B1 (en) * 1997-09-26 2003-08-12 Worldcom, Inc. Secure server architecture for Web based data management
US6609213B1 (en) * 2000-08-10 2003-08-19 Dell Products, L.P. Cluster-based system and method of recovery from server failures
US6654796B1 (en) * 1999-10-07 2003-11-25 Cisco Technology, Inc. System for managing cluster of network switches using IP address for commander switch and redirecting a managing request via forwarding an HTTP connection to an expansion switch
US6675308B1 (en) * 2000-05-09 2004-01-06 3Com Corporation Methods of determining whether a network interface card entry within the system registry pertains to physical hardware or to a virtual device
US6681262B1 (en) * 2002-05-06 2004-01-20 Infinicon Systems Network data flow optimization
US20040117476A1 (en) * 2002-12-17 2004-06-17 Doug Steele Method and system for performing load balancing across control planes in a data center
US6754716B1 (en) * 2000-02-11 2004-06-22 Ensim Corporation Restricting communication between network devices on a common network
US6760765B1 (en) * 1999-11-09 2004-07-06 Matsushita Electric Industrial Co., Ltd. Cluster server apparatus
US6772333B1 (en) * 1999-09-01 2004-08-03 Dickens Coal Llc Atomic session-start operation combining clear-text and encrypted sessions to provide id visibility to middleware such as load-balancers
US6813778B1 (en) * 1999-08-16 2004-11-02 General Instruments Corporation Method and system for downloading and managing the enablement of a list of code objects
US6820121B1 (en) * 2000-08-24 2004-11-16 International Business Machines Corporation Methods systems and computer program products for processing an event based on policy rules using hashing
US6856591B1 (en) * 2000-12-15 2005-02-15 Cisco Technology, Inc. Method and system for high reliability cluster management
US6862613B1 (en) * 2000-01-10 2005-03-01 Sun Microsystems, Inc. Method and apparatus for managing operations of clustered computer systems
US6868062B1 (en) * 2000-03-28 2005-03-15 Intel Corporation Managing data traffic on multiple ports
US6904458B1 (en) * 2000-04-26 2005-06-07 Microsoft Corporation System and method for remote management
US6928482B1 (en) * 2000-06-29 2005-08-09 Cisco Technology, Inc. Method and apparatus for scalable process flow load balancing of a multiplicity of parallel packet processors in a digital communication network
US20050193103A1 (en) * 2002-06-18 2005-09-01 John Drabik Method and apparatus for automatic configuration and management of a virtual private network
US6957186B1 (en) * 1999-05-27 2005-10-18 Accenture Llp System method and article of manufacture for building, managing, and supporting various components of a system
US6968551B2 (en) * 2001-06-11 2005-11-22 John Hediger System and user interface for generation and processing of software application installation instructions
US6968550B2 (en) * 1999-05-19 2005-11-22 International Business Machines Corporation Apparatus and method for synchronizing software between computers
US6976269B1 (en) * 2000-08-29 2005-12-13 Equinix, Inc. Internet co-location facility security system
US7027412B2 (en) * 2000-11-10 2006-04-11 Veritas Operating Corporation System for dynamic provisioning of secure, scalable, and extensible networked computer environments
US7043407B2 (en) * 1997-03-10 2006-05-09 Trilogy Development Group, Inc. Method and apparatus for configuring systems
US7054943B1 (en) * 2000-04-28 2006-05-30 International Business Machines Corporation Method and apparatus for dynamically adjusting resources assigned to plurality of customers, for meeting service level agreements (slas) with minimal resources, and allowing common pools of resources to be used across plural customers on a demand basis
US7093288B1 (en) * 2000-10-24 2006-08-15 Microsoft Corporation Using packet filters and network virtualization to restrict network communications
US7103185B1 (en) * 1999-12-22 2006-09-05 Cisco Technology, Inc. Method and apparatus for distributing and updating private keys of multicast group managers using directory replication
US7103874B2 (en) * 2003-10-23 2006-09-05 Microsoft Corporation Model-based management of computer systems and distributed applications
US7139999B2 (en) * 1999-08-31 2006-11-21 Accenture Llp Development architecture framework
US7155490B1 (en) * 2000-03-01 2006-12-26 Freewebs Corporation System and method for providing a web-based operating system
US7181731B2 (en) * 2000-09-01 2007-02-20 Op40, Inc. Method, system, and structure for distributing and executing software and data on different network and computer devices, platforms, and environments
US7197418B2 (en) * 2001-08-15 2007-03-27 National Instruments Corporation Online specification of a system which compares determined devices and installed devices
US20070192769A1 (en) * 2004-10-20 2007-08-16 Fujitsu Limited Program, method, and apparatus for managing applications
US7315801B1 (en) * 2000-01-14 2008-01-01 Secure Computing Corporation Network security modeling system and method
US7403901B1 (en) * 2000-04-13 2008-07-22 Accenture Llp Error and load summary reporting in a health care solution environment
US7464147B1 (en) * 1999-11-10 2008-12-09 International Business Machines Corporation Managing a cluster of networked resources and resource groups using rule - base constraints in a scalable clustering environment

Family Cites Families (146)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5031089A (en) 1988-12-30 1991-07-09 United States Of America As Represented By The Administrator, National Aeronautics And Space Administration Dynamic resource allocation scheme for distributed heterogeneous computer systems
JPH0488489A (en) 1990-08-01 1992-03-23 Internatl Business Mach Corp <Ibm> Character recognizing device and method using generalized half conversion
JPH04287290A (en) 1990-11-20 1992-10-12 Imra America Inc Hough transformation picture processor
EP0501610B1 (en) * 1991-02-25 1999-03-17 Hewlett-Packard Company Object oriented distributed computing system
US5766271A (en) * 1994-11-24 1998-06-16 Sanyo Electric Co., Ltd. Process for producing solid electrolyte capacitor
US5872928A (en) * 1995-02-24 1999-02-16 Cabletron Systems, Inc. Method and apparatus for defining and enforcing policies for configuration management in communications networks
US5724508A (en) * 1995-03-09 1998-03-03 Insoft, Inc. Apparatus for collaborative computing
US5678041A (en) * 1995-06-06 1997-10-14 At&T System and method for restricting user access rights on the internet based on rating information stored in a relational database
US5872914A (en) * 1995-08-31 1999-02-16 International Business Machines Corporation Method and apparatus for an account managed object class model in a distributed computing environment
US5793763A (en) 1995-11-03 1998-08-11 Cisco Technology, Inc. Security system for network address translation systems
US5828846A (en) * 1995-11-22 1998-10-27 Raptor Systems, Inc. Controlling passage of packets or messages via a virtual connection or flow
US5801970A (en) 1995-12-06 1998-09-01 Martin Marietta Corporation Model-based feature tracking system
US5898830A (en) * 1996-10-17 1999-04-27 Network Engineering Software Firewall providing enhanced network security and user transparency
US5748958A (en) * 1996-04-30 1998-05-05 International Business Machines Corporation System for utilizing batch requests to present membership changes to process groups
US6434598B1 (en) * 1996-07-01 2002-08-13 Sun Microsystems, Inc. Object-oriented system, method and article of manufacture for a client-server graphical user interface (#9) framework in an interprise computing framework system
US5948055A (en) 1996-08-29 1999-09-07 Hewlett-Packard Company Distributed internet monitoring system and method
US5832529A (en) * 1996-10-11 1998-11-03 Sun Microsystems, Inc. Methods, apparatus, and product for distributed garbage collection
US6061740A (en) * 1996-12-09 2000-05-09 Novell, Inc. Method and apparatus for heterogeneous network management
JPH10208056A (en) 1997-01-16 1998-08-07 Honda Motor Co Ltd Line detection method
US5826015A (en) 1997-02-20 1998-10-20 Digital Equipment Corporation Method and apparatus for secure remote programming of firmware and configurations of a computer over a network
US6065058A (en) * 1997-05-09 2000-05-16 International Business Machines Corp. Dynamic push filtering based on information exchanged among nodes in a proxy hierarchy
US6070243A (en) 1997-06-13 2000-05-30 Xylan Corporation Deterministic user authentication service for communication network
US6389464B1 (en) * 1997-06-27 2002-05-14 Cornet Technology, Inc. Device management system for managing standards-compliant and non-compliant network elements using standard management protocols and a universal site server which is configurable from remote locations via internet browser technology
US6108699A (en) * 1997-06-27 2000-08-22 Sun Microsystems, Inc. System and method for modifying membership in a clustered distributed computer system and updating system configuration
JP3829425B2 (en) * 1997-08-08 2006-10-04 ブラザー工業株式会社 Inkjet recording device
US5960371A (en) 1997-09-04 1999-09-28 Schlumberger Technology Corporation Method of determining dips and azimuths of fractures from borehole images
US6141749A (en) 1997-09-12 2000-10-31 Lucent Technologies Inc. Methods and apparatus for a computer network firewall with stateful packet filtering
US5991886A (en) * 1997-09-15 1999-11-23 Lucent Technologies Inc. Portable electronic device having a travel mode for use when demonstrating operability of the device to security personnel
US6065053A (en) * 1997-10-01 2000-05-16 Micron Electronics, Inc. System for resetting a server
EP0907145A3 (en) 1997-10-03 2003-03-26 Nippon Telegraph and Telephone Corporation Method and equipment for extracting image features from image sequence
US5999712A (en) * 1997-10-21 1999-12-07 Sun Microsystems, Inc. Determining cluster membership in a distributed computer system
US6192401B1 (en) * 1997-10-21 2001-02-20 Sun Microsystems, Inc. System and method for determining cluster membership in a heterogeneous distributed system
US5901958A (en) * 1997-12-01 1999-05-11 Andrews; Douglas S. Method of playing a royal card stud poker game at a casino gaming table
US6125447A (en) * 1997-12-11 2000-09-26 Sun Microsystems, Inc. Protection domains to provide security in a computer system
US6484261B1 (en) * 1998-02-17 2002-11-19 Cisco Technology, Inc. Graphical network security policy management
US6496187B1 (en) 1998-02-17 2002-12-17 Sun Microsystems, Inc. Graphics system configured to perform parallel sample to pixel calculation
EP1068693B1 (en) 1998-04-03 2011-12-21 Vertical Networks, Inc. System and method for transmitting voice and data using intelligent bridged tdm and packet buses
US6208345B1 (en) * 1998-04-15 2001-03-27 Adc Telecommunications, Inc. Visual data integration system and method
US6308174B1 (en) * 1998-05-05 2001-10-23 Nortel Networks Limited Method and apparatus for managing a communications network by storing management information about two or more configuration states of the network
US6311144B1 (en) * 1998-05-13 2001-10-30 Nabil A. Abu El Ata Method and apparatus for designing and analyzing information systems using multi-layer mathematical models
FR2779018B1 (en) 1998-05-22 2000-08-18 Activcard TERMINAL AND SYSTEM FOR IMPLEMENTING SECURE ELECTRONIC TRANSACTIONS
US6259448B1 (en) 1998-06-03 2001-07-10 International Business Machines Corporation Resource model configuration and deployment in a distributed computer network
US6311217B1 (en) * 1998-06-04 2001-10-30 Compaq Computer Corporation Method and apparatus for improved cluster administration
US6360265B1 (en) 1998-07-08 2002-03-19 Lucent Technologies Inc. Arrangement of delivering internet protocol datagrams for multimedia services to the same server
US6427163B1 (en) * 1998-07-10 2002-07-30 International Business Machines Corporation Highly scalable and highly available cluster system management scheme
US6466932B1 (en) 1998-08-14 2002-10-15 Microsoft Corporation System and method for implementing group policy
US6266707B1 (en) 1998-08-17 2001-07-24 International Business Machines Corporation System and method for IP network address translation and IP filtering with dynamic address resolution
US6717949B1 (en) 1998-08-31 2004-04-06 International Business Machines Corporation System and method for IP network address translation using selective masquerade
US6324571B1 (en) * 1998-09-21 2001-11-27 Microsoft Corporation Floating single master operation
US6728885B1 (en) 1998-10-09 2004-04-27 Networks Associates Technology, Inc. System and method for network access control using adaptive proxies
US6286052B1 (en) * 1998-12-04 2001-09-04 Cisco Technology, Inc. Method and apparatus for identifying network data traffic flows and for applying quality of service treatments to the flows
US6212559B1 (en) * 1998-10-28 2001-04-03 Trw Inc. Automated configuration of internet-like computer networks
US6691165B1 (en) * 1998-11-10 2004-02-10 Rainfinity, Inc. Distributed server cluster for controlling network traffic
US6353806B1 (en) * 1998-11-23 2002-03-05 Lucent Technologies Inc. System level hardware simulator and its automation
US6393456B1 (en) 1998-11-30 2002-05-21 Microsoft Corporation System, method, and computer program product for workflow processing using internet interoperable electronic messaging with mime multiple content type
US6393474B1 (en) * 1998-12-31 2002-05-21 3Com Corporation Dynamic policy management apparatus and method using active network devices
US6691168B1 (en) 1998-12-31 2004-02-10 Pmc-Sierra Method and apparatus for high-speed network rule processing
US6515969B1 (en) * 1999-03-01 2003-02-04 Cisco Technology, Inc. Virtual local area network membership registration protocol for multiple spanning tree network environments
US6760775B1 (en) * 1999-03-05 2004-07-06 AT&T Corp. System, method and apparatus for network service load and reliability management
US6510509B1 (en) 1999-03-29 2003-01-21 Pmc-Sierra Us, Inc. Method and apparatus for high-speed network rule processing
US6470332B1 (en) 1999-05-19 2002-10-22 Sun Microsystems, Inc. System, method and computer program product for searching for, and retrieving, profile attributes based on other target profile attributes and associated profiles
US6631141B1 (en) * 1999-05-27 2003-10-07 Ibm Corporation Methods, systems and computer program products for selecting an aggregator interface
US6968371B1 (en) 1999-06-23 2005-11-22 Clearwire Corporation Design for scalable network management systems
US6466984B1 (en) 1999-07-02 2002-10-15 Cisco Technology, Inc. Method and apparatus for policy-based management of quality of service treatments of network data traffic flows by integrating policies with application programs
US6549516B1 (en) 1999-07-02 2003-04-15 Cisco Technology, Inc. Sending instructions from a service manager to forwarding agents on a need to know basis
US6584499B1 (en) * 1999-07-09 2003-06-24 Lsi Logic Corporation Methods and apparatus for performing mass operations on a plurality of managed devices on a network
US6480955B1 (en) * 1999-07-09 2002-11-12 Lsi Logic Corporation Methods and apparatus for committing configuration changes to managed devices prior to completion of the configuration change
US6466978B1 (en) * 1999-07-28 2002-10-15 Matsushita Electric Industrial Co., Ltd. Multimedia file systems using file managers located on clients for managing network attached storage devices
US6601233B1 (en) * 1999-07-30 2003-07-29 Accenture Llp Business components framework
US6609198B1 (en) * 1999-08-05 2003-08-19 Sun Microsystems, Inc. Log-on service providing credential level change without loss of session continuity
US6684335B1 (en) 1999-08-19 2004-01-27 Epstein, Iii Edwin A. Resistance cell architecture
US6597956B1 (en) * 1999-08-23 2003-07-22 Terraspring, Inc. Method and apparatus for controlling an extensible computing system
US6587876B1 (en) * 1999-08-24 2003-07-01 Hewlett-Packard Development Company Grouping targets of management policies
US7404175B2 (en) * 2000-10-10 2008-07-22 Bea Systems, Inc. Smart generator
US6487622B1 (en) * 1999-10-28 2002-11-26 Ncr Corporation Quorum arbitrator for a high availability system
US6609148B1 (en) * 1999-11-10 2003-08-19 Randy Salo Clients remote access to enterprise networks employing enterprise gateway servers in a centralized data center converting plurality of data requests for messaging and collaboration into a single request
US6615256B1 (en) * 1999-11-29 2003-09-02 Microsoft Corporation Quorum resource arbiter within a storage network
US6529953B1 (en) * 1999-12-17 2003-03-04 Reliable Network Solutions Scalable computer network resource monitoring and location system
US7069432B1 (en) * 2000-01-04 2006-06-27 Cisco Technology, Inc. System and method for providing security in a telecommunication network
US6769008B1 (en) * 2000-01-10 2004-07-27 Sun Microsystems, Inc. Method and apparatus for dynamically altering configurations of clustered computer systems
US6493715B1 (en) * 2000-01-12 2002-12-10 International Business Machines Corporation Delivery of configuration change in a group
JP3790655B2 (en) 2000-03-06 2006-06-28 富士通株式会社 Label switch network system
US6601101B1 (en) * 2000-03-15 2003-07-29 3Com Corporation Transparent access to network attached devices
US6364439B1 (en) * 2000-03-31 2002-04-02 Interland, Inc. Computer storage systems for computer facilities
US6636929B1 (en) * 2000-04-06 2003-10-21 Hewlett-Packard Development Company, L.P. USB virtual devices
US6718361B1 (en) * 2000-04-07 2004-04-06 Network Appliance Inc. Method and apparatus for reliable and scalable distribution of data files in distributed networks
US6748447B1 (en) * 2000-04-07 2004-06-08 Network Appliance, Inc. Method and apparatus for scalable distribution of information in a distributed network
EP1292892A4 (en) * 2000-04-14 2006-11-15 Goahead Software Inc A system and method for upgrading networked devices
US6801937B1 (en) * 2000-05-31 2004-10-05 International Business Machines Corporation Method, system and program products for defining nodes to a cluster
US7418489B2 (en) 2000-06-07 2008-08-26 Microsoft Corporation Method and apparatus for applying policies
US6718379B1 (en) * 2000-06-09 2004-04-06 Advanced Micro Devices, Inc. System and method for network management of local area networks having non-blocking network switches configured for switching data packets between subnetworks based on management policies
US7366755B1 (en) * 2000-07-28 2008-04-29 International Business Machines Corporation Method and apparatus for affinity of users to application servers
US20020143960A1 (en) * 2000-08-02 2002-10-03 Erez Goren Virtual network generation system and method
US7069204B1 (en) * 2000-09-28 2006-06-27 Cadence Design System, Inc. Method and system for performance level modeling and simulation of electronic systems having both hardware and software elements
US7047518B2 (en) * 2000-10-04 2006-05-16 Bea Systems, Inc. System for software application development and modeling
US20020082821A1 (en) * 2000-10-31 2002-06-27 Glenn Ferguson Data model for automated server configuration
US20040073443A1 (en) * 2000-11-10 2004-04-15 Gabrick John J. System for automating and managing an IP environment
US8255513B2 (en) * 2000-12-14 2012-08-28 Hewlett-Packard, Caribe B.V. Topology information system for a managed world
US20020075844A1 (en) * 2000-12-15 2002-06-20 Hagen W. Alexander Integrating public and private network resources for optimized broadband wireless access and method
US6769005B1 (en) * 2001-02-13 2004-07-27 Silicon Access Networks Method and apparatus for priority resolution
US7246351B2 (en) * 2001-02-20 2007-07-17 Jargon Software System and method for deploying and implementing software applications over a distributed network
US20020118642A1 (en) * 2001-02-27 2002-08-29 Lee Daniel Joseph Network topology for use with an open internet protocol services platform
US7069337B2 (en) * 2001-03-20 2006-06-27 Mci, Inc. Policy-based synchronization of per-class resources between routers in a data network
US7028228B1 (en) * 2001-03-28 2006-04-11 The Shoregroup, Inc. Method and apparatus for identifying problems in computer networks
US7073059B2 (en) 2001-06-08 2006-07-04 Hewlett-Packard Development Company, L.P. Secure machine platform that interfaces to operating systems and customized control programs
US6944606B2 (en) * 2001-06-29 2005-09-13 National Instruments Corporation Measurements expert system and method for generating high-performance measurements software drivers
US7058181B2 (en) * 2001-08-02 2006-06-06 Senforce Technologies, Inc. Wireless bridge for roaming in network environment
US20030041139A1 (en) 2001-08-14 2003-02-27 Smartpipes, Incorporated Event management for a remote network policy management system
US7159125B2 (en) 2001-08-14 2007-01-02 Endforce, Inc. Policy engine for modular generation of policy for a flat, per-device database
CA2357087C (en) * 2001-09-06 2009-07-21 Cognos Incorporated Deployment manager for organizing and deploying an application in a distributed computing environment
AU2002328726A1 (en) * 2001-09-28 2003-04-14 Codagen Technologies Corp. A system and method for managing architectural layers within a software model
US7769823B2 (en) * 2001-09-28 2010-08-03 F5 Networks, Inc. Method and system for distributing requests for content
US7140000B2 (en) * 2001-10-09 2006-11-21 Certusoft Knowledge oriented programming
US7200665B2 (en) * 2001-10-17 2007-04-03 Hewlett-Packard Development Company, L.P. Allowing requests of a session to be serviced by different servers in a multi-server data service system
US7188364B2 (en) * 2001-12-20 2007-03-06 Cranite Systems, Inc. Personal virtual bridged local area networks
US7506058B2 (en) 2001-12-28 2009-03-17 International Business Machines Corporation Method for transmitting information across firewalls
US7188335B1 (en) * 2001-12-28 2007-03-06 Trilogy Development Group, Inc. Product configuration using configuration patterns
US20030138105A1 (en) * 2002-01-18 2003-07-24 International Business Machines Corporation Storing keys in a cryptology device
US7568019B1 (en) * 2002-02-15 2009-07-28 Entrust, Inc. Enterprise management system for normalization, integration and correlation of business measurements with application and infrastructure measurements
DE60318919T2 (en) 2002-03-29 2009-01-29 Advics Co., Ltd., Kariya Vehicle control device with power steering
US7130881B2 (en) * 2002-05-01 2006-10-31 Sun Microsystems, Inc. Remote execution model for distributed application launch and control
US8611363B2 (en) 2002-05-06 2013-12-17 Adtran, Inc. Logical port system and method
US6748958B1 (en) * 2002-06-17 2004-06-15 Patrick Gwen Flosser apparatus with floss tightening mechanism
US6801528B2 (en) * 2002-07-03 2004-10-05 Ericsson Inc. System and method for dynamic simultaneous connection to multiple service providers
US7210143B2 (en) * 2002-07-17 2007-04-24 International Business Machines Corporation Deployment of applications in a multitier compute infrastructure
US20040078787A1 (en) * 2002-07-19 2004-04-22 Michael Borek System and method for troubleshooting, maintaining and repairing network devices
US7505872B2 (en) * 2002-09-11 2009-03-17 International Business Machines Corporation Methods and apparatus for impact analysis and problem determination
US20040054791A1 (en) 2002-09-17 2004-03-18 Krishnendu Chakraborty System and method for enforcing user policies on a web server
US7530101B2 (en) * 2003-02-21 2009-05-05 Telecom Italia S.P.A. Method and system for managing network access device using a smart card
US7406692B2 (en) * 2003-02-24 2008-07-29 Bea Systems, Inc. System and method for server load balancing and server affinity
US7689676B2 (en) * 2003-03-06 2010-03-30 Microsoft Corporation Model-based policy application
US7072807B2 (en) * 2003-03-06 2006-07-04 Microsoft Corporation Architecture for distributed computing system and automated design, deployment, and management of distributed applications
US20040220792A1 (en) * 2003-04-30 2004-11-04 Gallanis Peter Thomas Performance modeling for information systems
US7389411B2 (en) * 2003-08-29 2008-06-17 Sun Microsystems, Inc. Secure transfer of host identities
US7237267B2 (en) * 2003-10-16 2007-06-26 Cisco Technology, Inc. Policy-based network security management
US7765540B2 (en) * 2003-10-23 2010-07-27 Microsoft Corporation Use of attribution to describe management information
US7778888B2 (en) * 2003-12-11 2010-08-17 International Business Machines Corporation Method for dynamically and automatically setting up offerings for IT services
US20050181775A1 (en) * 2004-02-13 2005-08-18 Readyalert Systems, Llc Alert notification service
US7571082B2 (en) * 2004-06-22 2009-08-04 Wells Fargo Bank, N.A. Common component modeling
US8627149B2 (en) * 2004-08-30 2014-01-07 International Business Machines Corporation Techniques for health monitoring and control of application servers
US7506338B2 (en) * 2004-08-30 2009-03-17 International Business Machines Corporation Method and apparatus for simplifying the deployment and serviceability of commercial software environments
US7653903B2 (en) * 2005-03-25 2010-01-26 Sony Corporation Modular imaging download system
US7802144B2 (en) * 2005-04-15 2010-09-21 Microsoft Corporation Model-based system monitoring
US7797147B2 (en) * 2005-04-15 2010-09-14 Microsoft Corporation Model-based system monitoring
US7743373B2 (en) * 2005-05-06 2010-06-22 International Business Machines Corporation Method and apparatus for managing software catalog and providing configuration for installation
US7587453B2 (en) * 2006-01-05 2009-09-08 International Business Machines Corporation Method and system for determining application availability

Patent Citations (71)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5878220A (en) * 1994-11-21 1999-03-02 Oracle Corporation Method and apparatus for storing and transferring data on a network
US5895499A (en) * 1995-07-03 1999-04-20 Sun Microsystems, Inc. Cross-domain data transfer using deferred page remapping
US6147995A (en) * 1995-11-15 2000-11-14 Cabletron Systems, Inc. Method for establishing restricted broadcast groups in a switched network
US5768271A (en) * 1996-04-12 1998-06-16 Alcatel Data Networks Inc. Virtual private network
US6085238A (en) * 1996-04-23 2000-07-04 Matsushita Electric Works, Ltd. Virtual LAN system
US6075776A (en) * 1996-06-07 2000-06-13 Nippon Telegraph And Telephone Corporation VLAN control system and method
US5822531A (en) * 1996-07-22 1998-10-13 International Business Machines Corporation Method and system for dynamically reconfiguring a cluster of computer systems
US5930798A (en) * 1996-08-15 1999-07-27 Predicate Logic, Inc. Universal data measurement, analysis and control system
US6209099B1 (en) * 1996-12-18 2001-03-27 Ncr Corporation Secure data processing method and system
US6151688A (en) * 1997-02-21 2000-11-21 Novell, Inc. Resource management in a clustered computer system
US6353898B1 (en) * 1997-02-21 2002-03-05 Novell, Inc. Resource management in a clustered computer system
US6338112B1 (en) * 1997-02-21 2002-01-08 Novell, Inc. Resource management in a clustered computer system
US7043407B2 (en) * 1997-03-10 2006-05-09 Trilogy Development Group, Inc. Method and apparatus for configuring systems
US5968126A (en) * 1997-04-02 1999-10-19 Switchsoft Systems, Inc. User-based binding of network stations to broadcast domains
US6598173B1 (en) * 1997-05-13 2003-07-22 Micron Technology, Inc. Method of remote access and control of environmental conditions
US6305015B1 (en) * 1997-07-02 2001-10-16 Bull S.A. Information processing system architecture
US6195355B1 (en) * 1997-09-26 2001-02-27 Sony Corporation Packet-Transmission control method and packet-transmission control apparatus
US6606708B1 (en) * 1997-09-26 2003-08-12 Worldcom, Inc. Secure server architecture for Web based data management
US6047325A (en) * 1997-10-24 2000-04-04 Jain; Lalit Network device for supporting construction of virtual local area networks on arbitrary local and wide area computer networks
US6178529B1 (en) * 1997-11-03 2001-01-23 Microsoft Corporation Method and system for resource monitoring of disparate resources in a server cluster
US6269079B1 (en) * 1997-11-12 2001-07-31 International Business Machines Corporation Systems, methods and computer program products for distributing connection information between ATM nodes
US6035405A (en) * 1997-12-22 2000-03-07 Nortel Networks Corporation Secure virtual LANs
US6167052A (en) * 1998-04-27 2000-12-26 Vpnx.Com, Inc. Establishing connectivity in networks
US6470025B1 (en) * 1998-06-05 2002-10-22 3Com Technologies System for providing fair access for VLANs to a shared transmission medium
US6546553B1 (en) * 1998-10-02 2003-04-08 Microsoft Corporation Service installation on a base function and provision of a pass function with a service-free base function semantic
US6570875B1 (en) * 1998-10-13 2003-05-27 Intel Corporation Automatic filtering and creation of virtual LANs among a plurality of switch ports
US6393485B1 (en) * 1998-10-27 2002-05-21 International Business Machines Corporation Method and apparatus for managing clustered computer systems
US6108702A (en) * 1998-12-02 2000-08-22 Micromuse, Inc. Method and apparatus for determining accurate topology features of a network
US6336171B1 (en) * 1998-12-23 2002-01-01 Ncr Corporation Resource protection in a cluster environment
US6449650B1 (en) * 1999-02-01 2002-09-10 Redback Networks Inc. Methods and apparatus for deploying quality of service policies on a data communication network
US6549934B1 (en) * 1999-03-01 2003-04-15 Microsoft Corporation Method and system for remote access to computer devices via client managed server buffers exclusively allocated to the client
US6564252B1 (en) * 1999-03-11 2003-05-13 Microsoft Corporation Scalable storage system with unique client assignment to storage server partitions
US6564261B1 (en) * 1999-05-10 2003-05-13 Telefonaktiebolaget Lm Ericsson (Publ) Distributed system to intelligently establish sessions between anonymous users over various networks
US6968550B2 (en) * 1999-05-19 2005-11-22 International Business Machines Corporation Apparatus and method for synchronizing software between computers
US6957186B1 (en) * 1999-05-27 2005-10-18 Accenture Llp System method and article of manufacture for building, managing, and supporting various components of a system
US6542504B1 (en) * 1999-05-28 2003-04-01 3Com Corporation Profile based method for packet header compression in a point to point link
US6813778B1 (en) * 1999-08-16 2004-11-02 General Instruments Corporation Method and system for downloading and managing the enablement of a list of code objects
US7139999B2 (en) * 1999-08-31 2006-11-21 Accenture Llp Development architecture framework
US6370573B1 (en) * 1999-08-31 2002-04-09 Accenture Llp System, method and article of manufacture for managing an environment of a development architecture framework
US6772333B1 (en) * 1999-09-01 2004-08-03 Dickens Coal Llc Atomic session-start operation combining clear-text and encrypted sessions to provide id visibility to middleware such as load-balancers
US6654796B1 (en) * 1999-10-07 2003-11-25 Cisco Technology, Inc. System for managing cluster of network switches using IP address for commander switch and redirecting a managing request via forwarding an HTTP connection to an expansion switch
US6760765B1 (en) * 1999-11-09 2004-07-06 Matsushita Electric Industrial Co., Ltd. Cluster server apparatus
US7464147B1 (en) * 1999-11-10 2008-12-09 International Business Machines Corporation Managing a cluster of networked resources and resource groups using rule-based constraints in a scalable clustering environment
US7103185B1 (en) * 1999-12-22 2006-09-05 Cisco Technology, Inc. Method and apparatus for distributing and updating private keys of multicast group managers using directory replication
US6862613B1 (en) * 2000-01-10 2005-03-01 Sun Microsystems, Inc. Method and apparatus for managing operations of clustered computer systems
US7315801B1 (en) * 2000-01-14 2008-01-01 Secure Computing Corporation Network security modeling system and method
US6754716B1 (en) * 2000-02-11 2004-06-22 Ensim Corporation Restricting communication between network devices on a common network
US7155490B1 (en) * 2000-03-01 2006-12-26 Freewebs Corporation System and method for providing a web-based operating system
US6868062B1 (en) * 2000-03-28 2005-03-15 Intel Corporation Managing data traffic on multiple ports
US7403901B1 (en) * 2000-04-13 2008-07-22 Accenture Llp Error and load summary reporting in a health care solution environment
US6574195B2 (en) * 2000-04-19 2003-06-03 Caspian Networks, Inc. Micro-flow management
US6904458B1 (en) * 2000-04-26 2005-06-07 Microsoft Corporation System and method for remote management
US7054943B1 (en) * 2000-04-28 2006-05-30 International Business Machines Corporation Method and apparatus for dynamically adjusting resources assigned to plurality of customers, for meeting service level agreements (slas) with minimal resources, and allowing common pools of resources to be used across plural customers on a demand basis
US6675308B1 (en) * 2000-05-09 2004-01-06 3Com Corporation Methods of determining whether a network interface card entry within the system registry pertains to physical hardware or to a virtual device
US6928482B1 (en) * 2000-06-29 2005-08-09 Cisco Technology, Inc. Method and apparatus for scalable process flow load balancing of a multiplicity of parallel packet processors in a digital communication network
US20020069369A1 (en) * 2000-07-05 2002-06-06 Tremain Geoffrey Donald Method and apparatus for providing computer services
US6609213B1 (en) * 2000-08-10 2003-08-19 Dell Products, L.P. Cluster-based system and method of recovery from server failures
US6820121B1 (en) * 2000-08-24 2004-11-16 International Business Machines Corporation Methods systems and computer program products for processing an event based on policy rules using hashing
US6976269B1 (en) * 2000-08-29 2005-12-13 Equinix, Inc. Internet co-location facility security system
US7181731B2 (en) * 2000-09-01 2007-02-20 Op40, Inc. Method, system, and structure for distributing and executing software and data on different network and computer devices, platforms, and environments
US7093288B1 (en) * 2000-10-24 2006-08-15 Microsoft Corporation Using packet filters and network virtualization to restrict network communications
US7027412B2 (en) * 2000-11-10 2006-04-11 Veritas Operating Corporation System for dynamic provisioning of secure, scalable, and extensible networked computer environments
US6856591B1 (en) * 2000-12-15 2005-02-15 Cisco Technology, Inc. Method and system for high reliability cluster management
US6968551B2 (en) * 2001-06-11 2005-11-22 John Hediger System and user interface for generation and processing of software application installation instructions
US7197418B2 (en) * 2001-08-15 2007-03-27 National Instruments Corporation Online specification of a system which compares determined devices and installed devices
US20030056063A1 (en) * 2001-09-17 2003-03-20 Hochmuth Roland M. System and method for providing secure access to network logical storage partitions
US6681262B1 (en) * 2002-05-06 2004-01-20 Infinicon Systems Network data flow optimization
US20050193103A1 (en) * 2002-06-18 2005-09-01 John Drabik Method and apparatus for automatic configuration and management of a virtual private network
US20040117476A1 (en) * 2002-12-17 2004-06-17 Doug Steele Method and system for performing load balancing across control planes in a data center
US7103874B2 (en) * 2003-10-23 2006-09-05 Microsoft Corporation Model-based management of computer systems and distributed applications
US20070192769A1 (en) * 2004-10-20 2007-08-16 Fujitsu Limited Program, method, and apparatus for managing applications

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7711121B2 (en) 2000-10-24 2010-05-04 Microsoft Corporation System and method for distributed management of shared computers
US7739380B2 (en) 2000-10-24 2010-06-15 Microsoft Corporation System and method for distributed management of shared computers
US7886041B2 (en) 2003-03-06 2011-02-08 Microsoft Corporation Design time validation of systems
US7792931B2 (en) 2003-03-06 2010-09-07 Microsoft Corporation Model-based system provisioning
US7890951B2 (en) 2003-03-06 2011-02-15 Microsoft Corporation Model-based provisioning of test environments
US7890543B2 (en) 2003-03-06 2011-02-15 Microsoft Corporation Architecture for distributed computing system and automated design, deployment, and management of distributed applications
US8122106B2 (en) 2003-03-06 2012-02-21 Microsoft Corporation Integrating design, deployment, and management phases for systems
US7684964B2 (en) 2003-03-06 2010-03-23 Microsoft Corporation Model and system state synchronization
US7689676B2 (en) 2003-03-06 2010-03-30 Microsoft Corporation Model-based policy application
US20040225922A1 (en) * 2003-05-09 2004-11-11 Sun Microsystems, Inc. System and method for request routing
US7571354B2 (en) * 2003-05-09 2009-08-04 Sun Microsystems, Inc. System and method for request routing
US20050022202A1 (en) * 2003-07-09 2005-01-27 Sun Microsystems, Inc. Request failover mechanism for a load balancing system
US7778422B2 (en) 2004-02-27 2010-08-17 Microsoft Corporation Security associations for devices
US7669235B2 (en) 2004-04-30 2010-02-23 Microsoft Corporation Secure domain join for computing devices
US7797147B2 (en) 2005-04-15 2010-09-14 Microsoft Corporation Model-based system monitoring
US8489728B2 (en) 2005-04-15 2013-07-16 Microsoft Corporation Model-based system monitoring
US10540159B2 (en) 2005-06-29 2020-01-21 Microsoft Technology Licensing, Llc Model-based virtual system provisioning
US9811368B2 (en) 2005-06-29 2017-11-07 Microsoft Technology Licensing, Llc Model-based virtual system provisioning
US9317270B2 (en) 2005-06-29 2016-04-19 Microsoft Technology Licensing, Llc Model-based virtual system provisioning
US8549513B2 (en) 2005-06-29 2013-10-01 Microsoft Corporation Model-based virtual system provisioning
US20070115981A1 (en) * 2005-10-14 2007-05-24 Dell Products L.P. System and method for filtering communications at a network interface controller
US8149866B2 (en) 2005-10-14 2012-04-03 Dell Products L.P. System and method for filtering communications at a network interface controller
US7941309B2 (en) 2005-11-02 2011-05-10 Microsoft Corporation Modeling IT operations/policies
US20070294596A1 (en) * 2006-05-22 2007-12-20 Gissel Thomas R Inter-tier failure detection using central aggregation point
US20090058098A1 (en) * 2007-08-13 2009-03-05 Michael Patrick Flynn Backup generators
US11223680B2 (en) * 2014-12-16 2022-01-11 Telefonaktiebolaget Lm Ericsson (Publ) Computer servers for datacenter management
US10122799B2 (en) 2016-03-29 2018-11-06 Experian Health, Inc. Remote system monitor
US10506051B2 (en) 2016-03-29 2019-12-10 Experian Health, Inc. Remote system monitor

Also Published As

Publication number Publication date
EP1202526A3 (en) 2004-05-19
US6886038B1 (en) 2005-04-26
EP2237523A2 (en) 2010-10-06
JP2011040096A (en) 2011-02-24
EP1202526A2 (en) 2002-05-02
JP4627768B2 (en) 2011-02-09
US7016950B2 (en) 2006-03-21
US20050102403A1 (en) 2005-05-12
EP2237523A3 (en) 2011-01-19
US7043545B2 (en) 2006-05-09
US20100287271A1 (en) 2010-11-11
JP2002202952A (en) 2002-07-19
US20050102388A1 (en) 2005-05-12
JP4188584B2 (en) 2008-11-26
US20050102404A1 (en) 2005-05-12
JP2007287165A (en) 2007-11-01

Similar Documents

Publication Publication Date Title
US7016950B2 (en) System and method for restricting data transfers and managing software components of distributed computers
US7711121B2 (en) System and method for distributed management of shared computers

Legal Events

Date Code Title Description

STCB Information on status: application discontinuation
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034766/0001
Effective date: 20141014