EP1312007A1 - Method and system for providing dynamic hosted service management - Google Patents

Method and system for providing dynamic hosted service management

Info

Publication number
EP1312007A1
Authority
EP
European Patent Office
Prior art keywords
server
servers
administrative group
customer account
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP01952274A
Other languages
English (en)
French (fr)
Other versions
EP1312007A4 (de)
Inventor
Kitrick B. Sheets (Neutility Corporation)
Philip S. Smith (Neutility Corporation)
Stephen J. Engel (Neutility Corporation)
Yuefan Deng (Neutility Corporation)
Joseph Guistozzi (Neutility Corporation)
Alexander Korobka (Neutility Corporation)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Galactic Computing Corp
Original Assignee
Neutility Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/710,095 (US6816905B1)
Application filed by Neutility Corp filed Critical Neutility Corp
Publication of EP1312007A1
Publication of EP1312007A4
Legal status: Withdrawn

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/16Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling

Definitions

  • the present invention relates generally to the field of data processing business practices. More specifically, the present invention relates to a method and system for providing dynamic management of hosted services across disparate customer accounts and/or geographically distinct sites.
  • ISPs Internet Service Providers
  • ASPs Application Service Providers
  • ISVs Independent Software Vendors
  • ESPs Enterprise Solution Providers
  • MSPs Managed Service Providers
  • HSPs Hosted Service Providers
  • HSPs provide users with access to hosted applications on the Internet in the same way that telephone companies provide customers with connections to their intended caller through the international telephone network.
  • the computer equipment that HSPs use to host the applications and services they provide is commonly referred to as a server. In its simplest form, a server can be a personal computer that is connected to the Internet through a network interface and that runs specific software designed to service the requests made by customers or clients of that server.
  • For all of the various delivery models that can be used by HSPs to provide hosted services, most HSPs will use a collection of servers that are connected to an internal network in what is commonly referred to as a "server farm", with each server performing unique tasks or the group of servers sharing the load of multiple tasks, such as mail server, web server, access server, accounting and management server.
  • In the context of hosting websites, for example, customers with smaller websites are often aggregated onto and supported by a single web server. Larger websites, however, are commonly hosted on dedicated web servers that provide services solely for that site.
  • ISP Survival Guide: Strategies For Running A Competitive ISP (1999).
  • HSPs have preferred to utilize server farms consisting of large numbers of individual personal computer servers wired to a common Internet connection or bank of modems and sometimes accessing a common set of disk drives.
  • When an HSP adds a new hosted service customer, one or more personal computer servers are manually added to the HSP server farm and loaded with the appropriate software and data (e.g., web content) for that customer. In this way, the HSP deploys only that level of hardware required to support its current customer level. Equally as important, the HSP can charge its customers an upfront setup fee that covers a significant portion of the cost of this hardware. By utilizing this approach, the HSP does not have to spend money in advance for large computer systems with idle capacity that will not generate immediate revenue for the HSP.
  • the server farm solution also affords an easier solution to the problem of maintaining security and data integrity across different customers than if those customers were all being serviced from a single larger mainframe computer. If all of the servers for a customer are loaded only with the software for that customer and are connected only to the data for that customer, security of that customer's information is insured by physical isolation.
  • For HSPs, numerous software billing packages are available to account and charge for these metered services, such as XaCCT from rens.com and HSP Power from inovaware.com. Other software programs have been developed to aid in the management of HSP networks, such as IP Magic from lightspeedsystems.com, Internet Services Management from resonate.com and MAMBA from luminate.com. The management and operation of an HSP has also been the subject of articles and seminars, such as Hursti, Jani, "Management of the Access Network and Service Provisioning," Seminar in Internetworking, April 19, 1999. An example of a typical HSP offering various configurations of hardware, software, maintenance and support for providing commercial levels of Internet access and website hosting at a monthly rate can be found at rackspace.com.
  • the HSP will manually add or remove a server to or from that portion of the HSP server farm that is directly cabled to the data storage and network interconnect of that client's website. In the case where services are to be added, the typical process would be some variation of the following: (a) an order to change service level is received from a hosted service customer, (b) the HSP obtains new server hardware to meet the requested change, (c) personnel for the HSP physically install the new server hardware at the site where the server farm is located, (d) cabling for the new server hardware is added to the data storage and network connections for that site, (e) software for the server hardware is loaded onto the server and personnel for the HSP go through a series of initialization steps to configure the software specifically to the requirements of this customer account, and (f) the newly installed and fully configured server joins the existing administrative group of servers providing hosted services for that customer account.
  • all of the servers in an administrative group are represented in a mapping table maintained by a gateway server.
  • the mapping table identifies different service groups for the administrative group, such as mail service group, database service group, access server group, etc.
  • the gateway server routes requests for the administrative group to the appropriate service group based on the mapping table.
  • a new server may be added to one of the service groups by loading the appropriate software component on that server, after which the gateway server will recognize the new server and add it to the mapping table and bring the new server up to speed with the rest of the servers in that service group using a transaction log maintained for each service group.
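The mapping-table routing and catch-up behavior described in this prior-art arrangement can be sketched in a few lines of code. This is only an illustration, assuming invented class and method names; it is not code from the patent or from the cited system.

```python
from collections import defaultdict
from itertools import count

class GatewayServer:
    """Toy model of a gateway that routes requests to service groups."""

    def __init__(self):
        self.mapping_table = defaultdict(list)    # service group -> server names
        self.transaction_log = defaultdict(list)  # service group -> past requests
        self._rr = defaultdict(count)             # per-group round-robin counters

    def route(self, service_group, request):
        servers = self.mapping_table[service_group]
        if not servers:
            raise RuntimeError(f"no servers mapped for group {service_group!r}")
        target = servers[next(self._rr[service_group]) % len(servers)]
        self.transaction_log[service_group].append(request)
        return target

    def add_server(self, service_group, server_name):
        # A newly added server first replays the group's transaction log so it
        # is brought up to speed with its peers, then enters the mapping table.
        catch_up = list(self.transaction_log[service_group])
        self.mapping_table[service_group].append(server_name)
        return catch_up
```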
  • the patent describes a software routine executing on a dedicated administrative server that uses a load balancing scheme to modify the mapping table to insure that requests for that administrative group are more evenly balanced among the various service groups that make up the administrative group.
  • Numerous patents have described techniques for workload balancing among servers in a single cluster or administrative groups.
  • U.S. Patent No. 6,006,529 describes software clustering that includes security and heartbeat arrangement under control of a master server, where all of the cluster members are assigned a common IP address and load balancing is performed within that cluster.
  • U.S. Patents Nos. 5,537,542, 5,948,065 and 5,974,462 describe various workload-balancing arrangements for a multi-system computer processing system having a shared data space.
  • U.S. Patent No. 6,097,882 describes a replicator system interposed between clients and servers to transparently redirect IP packets between the two based on server availability and workload.
  • Various techniques have also been used to coordinate the operation of multiple computers or servers in a single cluster.
  • U.S. Patent No. 6,014,669 describes cluster operation of multiple servers in a single cluster by using a lock-step distributed configuration file.
  • U.S. Patent No. 6,088,727 describes cluster control in a shared data space multi-computer environment. Other patents have described how a single image of the input/output space can be used to coordinate multiple computers.
  • U.S. Patent No. 6,067,545 describes a distributed file system with shared metadata management, replicated configuration database and domain load balancing, that allows for servers to fall into and out of a single domain under control of the configuration database.
  • a good example of this type of operation management system that is intended to be used by HSPs is the Tivoli Service Delivery Management platform that consists of a user administration module, a software distribution module, an inventory module, an enterprise console, a security module, an enterprise manager module that provides a customizable view of all of the components in a network once they are added to the network, and a workload scheduler that allows workload to be balanced among servers sharing a common data space. All of these modules operate using an over-the-network communication scheme involving agents on the various nodes in the network that collect and report status and incident information to the other modules.
  • the various modules of the Tivoli Service Delivery Management platform can take over and manage those components on a more automatic basis.
  • the process of physically adding hardware for a new node into the network remains essentially a manual process that is accomplished in the same manner as previously described.
  • U.S. Patent No. 5,615,329 describes a typical example of a redundant hardware arrangement that implements remote data shadowing using dedicated separate primary and secondary computer systems where the secondary computer system takes over for the primary computer system in the event of a failure of the primary computer system.
  • the problem with these types of mirroring or shadowing arrangements is that they can be expensive and wasteful, particularly where the secondary computer system is idled in a standby mode waiting for a failure of the primary computer system.
  • U.S. Patent No. 5,696,895 describes one solution to this problem in which a series of servers each run their own tasks, but each is also assigned to act as a backup to one of the other servers in the event that server has a failure. This arrangement allows the tasks being performed by both servers to continue on the backup server, although performance will be degraded.
  • Other examples of this type of solution include the Epoch Point of Distribution (POD) server design and the USI Complex Web Service.
  • the hardware components used to provide these services are predefined computing pods that include load-balancing software, which can also compensate for the failure of a hardware component within an administrative group. Even with the use of such predefined computing pods, the physical preparation and installation of such pods into an administrative group can take up to a week to accomplish.
  • ISA Internet Shock Absorber
  • the ISA service distributes a customer's static Web content to one or more caching servers located at various Points of Presence (POPs) on the Cable & Wireless Internet backbone. Requests for this static Web content can be directed to the caching servers at the various POP locations to offload this function from the servers in the administrative group providing hosted services for that customer.
  • POPs Points of Presence
  • the caching of static Web content is something that occurs naturally as part of the distribution of information over the Internet. Where a large number of users are requesting static information from a given IP address, it is common to cache this information at multiple locations on the Internet.
  • the ISA service allows a customer to proactively initiate the caching of static Web content on the Internet. While this solution has the potential to improve performance for delivery of static Web content, this solution is not applicable to the numerous other types of hosted services that involve interactive or dynamic information content.
  • the present invention is a method and system for operating a hosted service provider for the Internet in such a way as to provide dynamic management of hosted services across disparate customer accounts and/or geographically distinct sites.
  • a plurality of individual servers are allocated to a common administrative group defined for that customer account.
  • Each administrative group is configured to access software and data unique to that customer account for providing hosted services to the Internet for that customer account.
  • the system automatically monitors the performance and health of the servers in each administrative group. At least one server from a first administrative group is automatically and dynamically reallocated to a second administrative group in response to the automatic monitoring.
  • the automatic and dynamic reallocation of servers is accomplished by setting initialization pointers for the reallocated servers to access software and data unique to the customer account for the second administrative group, and then reinitializing the reallocated servers such that they join the second administrative group when restarted.
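A minimal sketch of this reallocation step follows, with invented names for the boot-image and configuration pointers (the patent publishes no code, so this is an assumption-laden illustration only):

```python
from dataclasses import dataclass, field

@dataclass
class AdministrativeGroup:
    name: str
    boot_image_path: str     # software unique to this customer account
    config_path: str         # data/configuration unique to this customer account
    members: list = field(default_factory=list)

@dataclass
class Server:
    blade_id: int
    boot_pointer: str = ""
    config_pointer: str = ""

    def reinitialize(self):
        # In the described system this would be a reset issued over the
        # out-of-band channel; here it is a placeholder.
        pass

def reallocate(server: Server, src: AdministrativeGroup, dst: AdministrativeGroup):
    if server in src.members:
        src.members.remove(server)
    server.boot_pointer = dst.boot_image_path   # point at the target account's software
    server.config_pointer = dst.config_path     # and its configuration/data
    server.reinitialize()                       # reboot; the target group's load-balancing
    dst.members.append(server)                  # software admits the restarted server
```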
  • the performance and health of the servers in each administrative group are monitored over a separate out-of-band communication channel dedicated to interconnecting the servers across administrative groups.
  • Each administrative group includes a local decision software program that communicates with a master decision software program that determines when and how to dynamically reallocate servers to different administrative groups in response to usage demands, available resources and service level agreements with each customer account.
  • a system for providing the dynamic management of hosted services for multiple customer accounts includes at least five servers operably connected to an intranet.
  • Each server includes host management circuitry providing a communication channel with at least one of the other servers that is separate from this intranet. At least four of the servers execute a local decision software program that monitors the server and communicates status information across the communication channel. At least two of the servers are allocated to a first administrative group for a first customer account and configured to access software and data unique to this first customer account, such that hosted services are provided via the Internet for this customer account. At least two of the other servers are allocated to a second administrative group for a second customer account and configured to access software and data unique to this second customer account, such that hosted services are provided via the Internet for this customer account.
  • At least one of the servers executes a master decision software program that collects status information from the other servers and dynamically reallocates at least one server from the first administrative group to the second administrative group in response to at least the status information.
  • the present invention is capable of dynamically reallocating servers across multiple disparate customer accounts to provide hosted services with a more economical and flexible server farm arrangement.
  • the ability of the present invention to support multiple administrative groups for multiple customers allows for an intelligent and dynamic allocation of server resources among different customer accounts.
  • Figure 1 is a simplified block diagram of a prior art arrangement of a server farm for a hosted service provider.
  • Figure 2 is a graphic representation of Internet traffic in relation to server capacity for a prior art server farm hosting multiple customer accounts.
  • Figure 3 is a simplified block diagram of the arrangement of a server farm in accordance with the present invention.
  • Figure 4 is a simplified block diagram similar to Figure 3 showing the dynamic reallocation of servers from a first customer account to a second customer account to address a hardware failure.
  • Figure 5 is a simplified block diagram similar to Figure 3 showing the dynamic reallocation of servers from a first customer account to a second customer account to address an increased usage demand.
  • FIG. 6 is a block diagram of a preferred embodiment of the components of a server farm in accordance with the present invention.
  • Figure 7 is an exploded perspective view of a preferred embodiment of the hardware for the server farm in accordance with the present invention.
  • Figure 8 is a block diagram showing the hierarchical relation of the various software layers utilized by the present invention for a given customer account.
  • Figure 9 is a block diagram of an embodiment of the present invention implemented across geographically disparate sites.
  • Figure 10 is a graphic representation of Internet traffic in relation to server capacity for the server farm of the present invention when hosting multiple customer accounts.
  • Figure 11 is a block diagram showing a preferred embodiment of the master decision software program of the present invention.
  • Figure 12 is a graphic representation of three different service level agreement arrangements for a given customer account.
  • Figure 13 is a graphic representation of Internet traffic in relation to server capacity for a multi-site embodiment of the present invention.
  • Figure 14 is a block diagram showing the master decision software program controlling the network switch and storage unit connections.
  • Figure 15 is a block diagram of the preferred embodiment of the local decision software program.
  • Figure 16 is a graphic representation of the workload measurements from the various measurement modules of the local decision software program under varying load conditions.
  • Figure 17 is a graphic representation of a decision surface generated by the local decision software program to request or remove a server from an administrative group.
  • Referring to FIG. 1, a simplified functional view of an existing server farm 20 for a hosted service provider is shown.
  • Such server farms are normally constructed using off-the-shelf hardware and software components statically configured to support the hosted service requirements of a given customer account.
  • the server farm 20 for the hosted server provider is supporting hosted services for four different customer accounts.
  • the server farm 20 is connected to the Internet 22 by network switches/routers 24.
  • the network switches 24 are in turn connected to internal network switches/routers 26 that form an intranet among the front-end/content servers 28 and back-end compute servers 30 for a given customer account. All front-end/content servers 28 and back-end/compute servers 30 are connected to disk systems 32 containing data and software unique to that customer account.
  • the disk systems 32 may be included within the server housing, or the disk systems 32 may be housed in physically separate units directly connected to each of the servers 28, 30 or attached to more than one server 28, 30 in a storage area network (SAN) or network attached storage (NAS) configuration.
  • SAN storage area network
  • NAS network attached storage
  • system resources (e.g., servers, disks, network links)
  • a relatively simple website has been designed for any given customer account such that under a projected peak load the customer account may require three front-end servers 28 to handle user requests and a quad processor back-end server 30 to handle database queries/updates generated by these requests.
  • hardware-based technology such as F5 Big-IP, Cisco Local Director, or Foundry ServerIron, or a software-based solution such as Windows Load Balance Service (WLBS) or equivalent will be used to distribute the user requests evenly across the front-end/content servers 28.
  • WLBS Windows Load Balance Service
  • the back-end database/compute server 30 will commonly be clustered to provide some level of fault tolerance.
  • the website for this customer account is an e-commerce site designed to handle a peak load of 5000 transactions per minute.
  • the websites for the remaining customer accounts in the server farm 20 have been designed to handle peak loads of 10,000, 15,000 and 5000 transactions per minute, respectively.
  • having to design and configure each customer account to handle an anticipated peak load likely results in significant wasted capacity within the overall server farm 20. Even though the server farm 20 handling multiple customer accounts may have excess aggregate capacity, this extra capacity cannot be used to respond to hardware failures or unexpected increases in peak load from one account to the next.
  • Resources configured for a particular customer account are dedicated to that account and to that account only.
  • Web traffic will be routed to the remaining front-end servers 28. If the customer account was busy before the hardware failure and Web traffic remains constant or increases after the failure, the remaining front-end servers 28 will quickly become overloaded by servicing their previous workload as well as the additional traffic redirected from the failed server. In a best-case scenario, the system management software for the server farm 20 would notice that a server had failed and send a message to a site manager (via pager and/or e-mail) indicating the server failure.
  • the site manager can physically remove the failed hardware component, install a spare hardware component that has hopefully been stockpiled for this purpose, recable the new hardware component, configure and install the appropriate software for that customer account, and allow the new hardware component to rejoin the remaining front-end servers 28. Ultimately, this process could be accomplished in less than an hour. If the message is not received in a timely manner, if the site manager is not located at the site where the server farm is located, or if there is no stockpiled spare hardware available to replace the failed unit, this process will take even longer. In the meantime, response times for users accessing the customer account are degraded and the customer account becomes increasingly vulnerable to another hardware failure during this period.
  • the server farm 40 includes network switches 44 to establish interconnection between the server farm 40 and the Internet 22.
  • a population of servers 46 is managed under control of an engine group manager 48.
  • Each of the servers 46 is a stateless computing device that is programmatically connected to the Internet via the network switches 44 and to a disk storage system 50.
  • the servers 46 are connected to the disk storage system 50 via a Fibre Channel storage area network (SAN).
  • the servers 46 may be connected to the disk storage system 50 via a network attached storage (NAS) arrangement, a switchable crossbar arrangement or any similar interconnection technique.
  • NAS network attached storage
  • the engine group manager 48 is responsible for automatically allocating the stateless servers 46 among multiple customer accounts and then configuring those servers for the allocated account. This is done by allocating the servers for a given customer account to a common administrative group 52 defined for that customer account and configured to access software and data unique to that customer account.
  • the engine group manager 48 automatically monitors each administrative group and automatically and dynamically reallocates servers 46' from a first administrative group 52-a to a second administrative group 52-b in response to the automatic monitoring. This is accomplished by using the engine group manager 48 to set initialization pointers for the reallocated servers 46' from the first administrative group 52-a to access software and data unique to the customer account for the second administrative group 52-b, and then reinitializing the reallocated servers 46' such that the reallocated servers 46' join the second administrative group 52-b.
  • the present invention can make a reallocated server 46' available to a new administrative group 52 in as little as a few minutes.
  • load-balancing software is more typically found in connection with front-end/content servers, whereas clustering software or a combination of clustering software and load-balancing software are more typically used in connection with back-end/compute servers.
  • load-balancing software will be used to refer to any of these possible combinations.
  • the reallocated servers 46' automatically join the second administrative group because the software for the second administrative group 52-b includes load-balancing software that will automatically add or remove a server from that administrative group in response to the server being brought online (i.e., reset and powered on) or brought offline (i.e., reset and powered off).
  • load-balancing software is widely known and available today; however, existing load-balancing software is only capable of adding or removing servers from a single administrative group. In this embodiment, the engine group manager 48 takes advantage of capabilities of currently available commercial load-balancing application software to allow for the dynamic reallocation of servers 46' across different administrative groups 52.
  • agents or subroutines within the operating system software for the single administrative group could be responsible for integrating a reallocated server 46' into the second administrative group 52-b once the reallocated server 46' is brought online.
  • the engine group manager 48 could publish updates to a listing of available servers for each administrative group 52.
  • the engine group manager 48 will set pointers in each of the servers 46 for an administrative group 52 to an appropriate copy of the boot image software and configuration files, including operating system and application programs, that had been established for that administrative group 52.
  • when a reallocated server 46' is rebooted, its pointers have been reset by the engine group manager 48 to point to the boot image software and configuration files for the second administrative group 52-b, instead of the boot image software and configuration files for the first administrative group 52-a.
  • each administrative group 52 represents the website or similar hosted services being provided by the server farm 40 for a unique customer account. Although different customer accounts could be paid for by the same business or by a related commercial entity, it will be understood that the data and software associated with a given customer account, and therefore with a given administrative group 52, will be unique to that customer account.
  • each administrative group 52 consists of unique software, including conventional operating system software, that does not extend outside servers 46 which have been assigned to the administrative group 52.
  • This distributed approach of the present invention allows for the use of simpler, conventional software applications and operating systems that can be installed on relatively inexpensive, individual servers. In this way, the individual elements that make up an administrative group 52 can be comprised of relatively inexpensive commercially available hardware servers and standard software programs.
  • FIGS 6 and 7 show a preferred embodiment of the components and hardware for the server farm 40 in accordance with the present invention. Although the preferred embodiment of the present invention is described with respect to this hardware, it will be understood that the concept of the present invention is equally applicable to a server farm implemented using all conventional servers, including the currently available 1U or 2U packaged servers, if those servers are provided with the host management circuitry or its equivalent as will be described.
  • the hardware for the server farm 40 is a scalable engine 100 comprised of a large number of commercially available server boards 102 each arranged as an engine blade 132 in a power and space efficient cabinet 110.
  • the engine blades 132 are removably positioned in a front side 112 of the cabinet 110 in a vertical orientation.
  • a through plane 130 in the middle of the cabinet 110 provides common power and controls peripheral signals to all engine blades 132.
  • I/O signals for each engine blade 132 are routed through apertures in the through plane 130 to interface cards 134 positioned in the rear of the cabinet 110.
  • the I/O signals will be routed through an appropriate interface card 134 either to the Internet 22 via the network switch 44, or to the disk storage 50.
  • separate interface cards 134 are used for these different communication paths.
  • the scalable engine can accommodate different types of server boards 102 in the same cabinet 110 because of a common blade carrier structure 103.
  • Different types of commercially available motherboards 102 are mounted in the common blade carrier structure 103 that provides a uniform mechanical interface to the cabinet 110.
  • a specially designed PCI host board 104 that can plug into various types of motherboards 102 has connections routed through the through plane 130 for connecting to the interface cards 134. Redundant hot-swappable high-efficiency power supplies 144 are connected to the common power signals on the through plane 130.
  • the host board 104 includes management circuitry that distributes the power signals to the server board 102 for that engine blade 132 by emulating the ATX power management protocol.
  • Replaceable fan trays 140 are mounted below the engine blades 132 to cool the engine 100.
  • the cabinet 110 accommodates multiple rows of engine blades 132 in a chassis assembly 128 that includes a pair of sub-chassis 129 stacked on top of each other and positioned on top of a power frame 146 that holds the power supplies 144.
  • the cabinet 110 will also include rack-mounted Ethernet network switches 44 and 147 and storage switches 149 attached to disk drives 50 over a Fibre Channel network.
  • the server farm 40 can accommodate administrative groups 52 for any number of customers depending upon the total number of servers 46 in the server farm 40.
  • multiple cabinets 110 can be integrated together to scale the total number of servers 46 at a given location.
  • each engine blade 132 can be populated with the most recent processors for Intel, SPARC or PowerPC designs, each of which can support standard operating system environments such as Windows NT, Windows 2000, Linux or Solaris.
  • Each engine blade 132 can accommodate one or more server boards 102, and each server board may be either a single or multiprocessor design in accordance with the current ATX form factor or a new form factor that may be embraced by the industry in the future.
  • the communication channel 106 is implemented as a Controller Area Network (CAN) bus that is separate from the communication paths for the network switch 44 or storage switches 149.
  • a second fault backup communication channel 106' could be provided to allow for fault tolerance and redundant communication paths for the group manager software 48.
  • CAN Controller Area Network
  • an engine blade 132 can include a local hard drive 107 that is accessed through the host board 104 such that information stored on that local hard drive 107 can be configured by the host board via the communication channel 106.
  • the host board 104 preferably includes power management circuitry 108 that enables the use of common power supplies for the cabinet 110 by emulating the ATX power management sequence to control the application of power to the server board 102.
  • a back channel Ethernet switch 147 also allows for communication of application and data information among the various server boards 102 within the server farm 40 without the need to route those communications out over the Internet 22.
  • each cabinet 110 can house up to 32 engine blades 132.
  • the network switches 44 and 147 could comprise two 32 circuit switched Ethernet network routers from Foundry.
  • the network switches 44 and 147 allow the connection between a server 46 and the network switches 44 and 147 to be dynamically adjusted by changing the IP address for the server.
  • For connecting the servers 46 to the disk storage units 50, two options are available. First, unique hardware and software can be inserted in the form of a crossbar switch 149 between the engine blades 132 and the disk storage units 50, which would abstract away the details of the underlying SAN storage hardware configuration. In this case, the link between the disk storage units 50 and each blade 132 would be communicated to the crossbar switch 149 through a set of software APIs. Alternatively, commercially available Fibre Channel switches or RAID storage boxes could be used to build connectivity dynamically between the blades 132 and disk storage units 50.
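As a hedged illustration of the two connectivity options just described, a thin fabric abstraction could hide from the engine group manager whether a crossbar switch API or a commercial Fibre Channel switch sits underneath. The interfaces below are assumptions made for this sketch, not APIs of any real crossbar or Fibre Channel product.

```python
class StorageFabric:
    """Abstract view of blade-to-storage connectivity (illustrative only)."""
    def attach(self, blade_id: int, storage_unit: str) -> None: ...
    def detach(self, blade_id: int, storage_unit: str) -> None: ...

class CrossbarFabric(StorageFabric):
    def __init__(self, crossbar_api):
        self.api = crossbar_api                      # assumed software API of the crossbar switch
    def attach(self, blade_id, storage_unit):
        self.api.map(blade_id, storage_unit)         # map the blade's link to the storage unit
    def detach(self, blade_id, storage_unit):
        self.api.unmap(blade_id, storage_unit)

class FibreChannelFabric(StorageFabric):
    def __init__(self, switch):
        self.switch = switch                         # assumed driver for a Fibre Channel switch
    def attach(self, blade_id, storage_unit):
        self.switch.add_zone(blade_id, storage_unit)     # zone the blade's port to the LUN
    def detach(self, blade_id, storage_unit):
        self.switch.remove_zone(blade_id, storage_unit)
```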
  • a layer of software inside the engine group manager 48 performs the necessary configuration adjustments to the connections between the server blades 132, the network switches 147 and the disk storage units 50. In another embodiment, a portion of the servers 46 could be permanently cabled to the network switches or disk storage units to decrease switch costs if, for example, the set of customer accounts supported by a given portion of the server farm 40 will always include a base number of servers 46 that cannot be reallocated. In this case, the base number of servers 46 for each administrative group 52 could be permanently cabled to the associated network switch 149 and disk storage unit 50 for that administrative group 52.
  • the server farm system 40 of the present invention can dynamically manage hosted services provided to multiple customer accounts.
  • each server 46 is operably connected to an intranet 54.
  • the intranet is formed over the same network switches 44 that interconnect the servers 46 with the Internet 22 or over similar network switches such as network switches 147 that interconnect the servers 46 to each other.
  • Each server 46 has management circuitry on the host board 104 that provides a communication channel 106 with at least one of the other servers 46 that is separate from the intranet 54 created by the network switches 44 and/or 147.
  • At least four of the servers 46 are configured to execute a local decision software program 70 that monitors the server 46 and communicates status information across the communication channel 106. At least two of these servers 46 are allocated to a first administrative group 52-a for a first customer account and configured to access software and data unique to the first customer account to provide hosted services to the Internet for that customer account. At least another two of the servers 46 are allocated to a second administrative group 52-b for a second customer account and configured to access software and data unique to the second customer account to provide hosted services to the Internet for that customer account. At least one of the servers 46 executes a master decision software program 72 that collects status information from the local decision software programs 70 executing on the other servers 46.
  • a pair of servers 46 are slaved together using fault tolerant coordination software to form a fault tolerant/redundant processing platform for the master decision software program.
  • the master decision software program 72 dynamically reallocates at least one server 46' from the first administrative group 52-a to the second administrative group 52-b in response to at least the status information collected from the local decision software programs 70.
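The following sketch, using invented names and arbitrary thresholds, illustrates the kind of decision the master decision software program might make from the collected status information; the actual logic described in the patent additionally weighs service level agreements and predicted demand.

```python
def master_decision_step(groups, status_by_group, reallocate):
    """One decision pass (illustrative only).

    groups: dict group-name -> list of servers
    status_by_group: dict group-name -> aggregate load in [0, 1]
    reallocate: callback(server, source_group_name, target_group_name)
    """
    starved = max(status_by_group, key=status_by_group.get)   # most heavily loaded group
    idle = min(status_by_group, key=status_by_group.get)      # most lightly loaded group
    # The thresholds here are placeholders for the SLA-driven criteria in the patent.
    if (status_by_group[starved] > 0.85 and status_by_group[idle] < 0.40
            and len(groups[idle]) > 1):
        server = groups[idle].pop()            # take a server from the lightly loaded group
        reallocate(server, idle, starved)      # reset its boot pointers and reinitialize it
        groups[starved].append(server)
```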
  • the servers 46 for both administrative groups 52 can be arranged in any configuration specified for a given customer account. As shown in Figure 3, three of the servers 46 for administrative group 52-b are configured as front-end servers with a single server 46 being configured as the back-end/compute server for this customer account.
  • the master decision software program 72 determines that it is necessary to reallocate server 46' from its current usage as a server for the first administrative group 52-a to being used as a back-end/compute server for the second administrative group 52-b. The preferred embodiment for how this decision is arrived at will be described in connection with the description of the operation of the local decision software program 70. Following the procedure just described, the master decision software program 72 directs the dynamic reallocation of reallocated server 46' to the second administrative group 52-b as shown in Figure 4.
  • a server farm 40 having thirty-two servers 46 could be set up to allocate six servers to each of four different customer accounts, with one server 46 executing the master decision software program 72 and a remaining pool 56 of seven servers 46 that are initially unassigned and can be allocated to any of the four administrative groups 52 defined for that server farm.
  • the preferred embodiment of the present invention uses this pool 56 as a buffer to further reduce the time required to bring a reallocated server 46' into an administrative group 52 by eliminating the need to first remove the reallocated server 46' from its existing administrative group 52.
  • the pool 56 can have both warm servers and cold servers.
  • a warm server would be a server 46 that has already been configured for a particular administrative group 52 and therefore it is not necessary to reboot that warm server to allow it to join the administrative group.
  • a cold server would be a server that is not configured to a particular administrative group 52 and therefore it will be necessary to reboot that cold server in order for it to join the administrative group.
  • reallocated servers 46' can be allocated to a new administrative group singly or as a group with more than one reallocated server 46' being simultaneously reallocated from a first administrative group 52-a to a second administrative group 52-b.
  • Although the network switches 44, 147 and storage switches 149 are configured to accommodate such dynamic reallocation, it should also be understood that multiple servers 46 may be reallocated together as a group if it is necessary or desirable to reduce the number of dynamically configurable ports on the network 44, 147 and/or storage switches 149.
  • One of the significant advantages of the present invention is that the process of reconfiguring servers from one administrative group 52-a to a second administrative group 52-b will wipe clean all of the state associated with a particular customer account for the first administrative group from the reallocated server 46' before that server is brought into service as part of the second administrative group 52-b.
  • This provides a natural and very efficient security mechanism for precluding intentional or unintentional access to data between different customer accounts. Unless a server 46 or 46' is a member of a given administrative group 52-a, there is no way for that server to have access to the data or information for a different administrative group 52-b.
  • the present invention keeps the advantages of the simple physical separation between customer accounts that is found in conventional server farm arrangements, but does this while still allowing hardware to be automatically and dynamically reconfigured in the event of a need or opportunity to make better usage of that hardware.
  • the only point of access for authorization and control of this reconfiguration is via the master decision software program 72 over the out-of-band communication channel 106.
  • each server 46 is programmatically connected to the Internet 22 under control of the master decision software program 72.
  • the master decision software program 72 also switches the reallocated server 46' to be operably connected to a portion of the disk storage unit storing software and data unique to the customer account of the second administrative group.
  • the use of an out-of-band communication channel 106 separate from the intranet 54 over the network switches 44 for communicating at least a portion of the status information utilized by the master decision software program 72 is preferred for reasons of security, fault isolation and bandwidth isolation. In a preferred embodiment, the communication channel 106 is a serial Controller Area Network (CAN) bus operating at a bandwidth of 1 Mb/s within the cabinet 110, with a secondary backbone also operating at a bandwidth of 1 Mb/s between different cabinets 110.
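As an illustration of how status information might be carried over such a CAN-based out-of-band channel, the sketch below assumes a simple JSON encoding split into 8-byte frames (classic CAN frames carry at most 8 data bytes). The message fields are invented for this example and are not taken from the patent.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class StatusReport:
    blade_id: int
    admin_group: str
    cpu_load: float          # fraction of CPU in use
    request_rate: float      # requests per second seen by this blade
    healthy: bool

def encode_for_can(report: StatusReport, max_frame: int = 8) -> list[bytes]:
    """Split a serialized report into frames no larger than a CAN data field."""
    payload = json.dumps(asdict(report)).encode()
    return [payload[i:i + max_frame] for i in range(0, len(payload), max_frame)]
```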
  • CAN Serial Controller Area Network
  • IP Internet Protocol
  • Figure 8 shows a block diagram of the hierarchical relation of one embodiment of the various data and software layers utilized by the present invention for a given customer account.
  • Customer data and databases 60 form the base layer of this hierarchy.
  • a web data management software layer 62 may be incorporated to manage the customer data 60 across multiple instances of storage units that comprise the storage system 50.
  • Cluster and/or load- balancing aware application software 64 comprises the top layer of what is conventionally thought of as the software and data for the customer's website.
  • Load-balancing software 66 groups multiple servers 46 together as part of the common administrative group 52. Multiple instances of conventional operating system software 68 are present, one for each server 46.
  • the load-balancing software 66 and operating system software 68 may be integrated as part of a common software package within a single administrative group 52.
  • Below the conventional operating system software 68 is the engine operating software 48 of the present invention that manages resources across multiple customer accounts 52-a and 52-b.
  • the servers 46 assigned to the first administrative group 52-a are located at a first site 80 and the servers 46 assigned to the second administrative group 52-b are located at a second site 82 geographically remote from the first site 80.
  • the system further includes an arrangement for automatically replicating at least data for the first administrative group 52-a to the second site 82.
  • a communication channel 84 separate from the network switches 44 is used to replicate data from the disk storage units 50-a at the first site 80 to the disk storage units 50-b at the second site 82.
  • the purpose of this arrangement is twofold. First, replication of the data provides redundancy and backup protection that allows for disaster recovery in the event of a disaster at the first site 80. Second, replication of the data at the second site 82 allows the present invention to include the servers 46 located in the second site 82 in the pool of available servers which the master decision software program 72 may use to satisfy increased demand for the hosted services of the first customer by dynamically reallocating these servers to the first administrative group 52-a.
  • the coordination between master decision software programs 72 at the first site 80 and second site 82 is preferably accomplished by the use of a global decision software routine 86 that communicates with the master decision software program 72 at each site.
  • This modular arrangement allows the master decision software programs 72 to focus on managing the server resources at a given site and extends the concept of having each site 80, 82 request additional off-site services from the global decision software routine 86 or offer to make available off-site services in much the same way that the local decision software programs 70 make requests for additional servers or make servers available for reallocation to the master decision software program 72 at a given site.
  • the multi-site embodiment of the present invention utilizes commercially available SAN or NAS storage networking software to implement a two-tiered data redundancy and replication hierarchy.
  • the working version 74 of the customer data for the first customer account is maintained on the disk storage unit 50 at the first site 80.
  • Redundancy data protection such as data mirroring, data shadowing or RAID data protection is used to establish a backup version 76 of the customer data for the first customer account at the first site 80.
  • the networking software utilizes the communication channel 84 to generate a second backup version 78 of the customer data for the first customer account located at the second site 82.
  • the use of a communication channel 84 that is separate from the connection of the network switches 44 to the Internet 22 preferably allows for redundant communication paths and minimizes the impact of the background communication activity necessary to generate the second backup version 78.
  • the backup version 78 of the customer data for the first customer account located at the second site 82 could be routed through the network switches 44 and the Internet 22.
  • additional backup versions of the customer data could be replicated at additional site locations to further expand the capability of the system to dynamically reallocate servers from customer accounts that are underutilizing these resources to customer accounts in need of these resources.
  • the ability of the present invention to dynamically reallocate servers from customer accounts that are underutilizing these resources to customer accounts in need of these resources allows for the resources of the server farm 40 to be used more efficiently in providing hosted services to multiple customer accounts.
  • the overall allocation of servers 46 to each customer account is accomplished such that a relatively constant marginal overcapacity bandwidth is maintained for each customer account.
  • the present invention allows for up-to-the-minute changes in server resources that are dynamically allocated on an as needed basis.
  • Figure 10 also shows the advantages of utilizing multiple geographically distinct sites for locating portions of the server farm 40.
  • the peak usages for customer accounts 94 and 95 are time shifted from those of the other customer accounts 91, 92 and 93 due to the difference in time zones between site location 80 and site location 82.
  • the present invention can take advantage of these time shifted differences in peak usages to allocate rolling server capacity to site locations during a time period of peak usage from other site locations which are experiencing a lull in activity.
  • in one embodiment of the multi-site configuration of the present invention, as shown in Figure 13, at least three separate site locations 80, 82 and 84 are preferably situated geographically at least 24 divided by N hours apart from each other, where N represents the number of distinct site locations in the multi-site configuration.
  • the site locations are preferably eight hours apart from each other.
  • the time difference realized by this geographic separation allows for the usage patterns of customer accounts located at all three sites to be aggregated and serviced by a combined number of servers that is significantly less than would otherwise be required if each of the servers at a given location were not able to utilize servers dynamically reallocated from one or more of the other locations.
  • the advantage of this can be seen in that, when site location 80 is experiencing nighttime usage levels, servers from this site location 80 can be dynamically reallocated to site location 82 that is experiencing daytime usage levels.
  • site location 84 experiences evening usage levels and may or may not be suited to have servers reallocated from this location to another location or vice versa.
  • a site location is arranged so as to look to borrow capacity first from a site location that is at a later time zone (i.e., to the east of that site) and will look to make extra capacity available to site locations that are at an earlier time zone (i.e., to the west of that site).
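A small sketch of this east-first borrowing preference follows, using an invented site table; as the next item notes, other preferences based on past and predicted usage could also be layered on top of this simple rule.

```python
def pick_donor_site(requesting_site, sites):
    """Choose a site to borrow capacity from (illustrative only).

    sites: dict name -> {'utc_offset': int, 'spare_servers': int}
    """
    home = sites[requesting_site]['utc_offset']
    candidates = [(name, s) for name, s in sites.items()
                  if name != requesting_site and s['spare_servers'] > 0]
    # Prefer donors at a later (more easterly) time zone than the requester.
    eastern = [c for c in candidates if c[1]['utc_offset'] > home]
    pool = eastern or candidates
    return max(pool, key=lambda c: c[1]['spare_servers'])[0] if pool else None
```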
  • Other preferences can also be established depending upon past usage and predicted patterns of use.
  • the master decision software program 72 includes a resource database 150, a service level agreement database 152, a master decision logic module 154 and a dispatch module 156.
  • the master decision logic module 154 has access to the resource database 150 and the service level agreement database 152 and compares the status information to information in the resource database 150 and the service level agreement database 152 to determine whether to dynamically reallocate servers from the first customer account to the second customer account.
  • the dispatch module 156 is operably linked to the master decision logic module 154 to dynamically reallocate servers when directed by the master decision logic module 154 by using the communication channel 106 to set initialization pointers for the reallocated servers 46' to access software and data unique to the customer account for the second administrative group 52-b and reinitializing the reallocated server 46' such that at least one server joins the second administrative group 52-b.
  • the dispatch module 156 includes a set of connectivity rules 160 and a set of personality modules 162 for each server 46.
  • the connectivity rules 160 provide instructions for connecting a particular server 46 to a given network switch 44 or data storage unit 50.
  • the personality module 162 describes the details of the particular software configuration of the server board 102 to be added to an administrative work group for a customer account.
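The connectivity rules and personality modules lend themselves to simple declarative records. The sketch below invents field names to show roughly what each might carry and how the dispatch module could apply them; none of these identifiers come from the patent.

```python
from dataclasses import dataclass

@dataclass
class ConnectivityRule:
    blade_id: int
    network_switch_port: str     # e.g. which port on network switch 44/147 to use
    storage_switch_port: str     # e.g. which port on storage switch 149 to use

@dataclass
class PersonalityModule:
    admin_group: str
    boot_image: str              # operating system / application boot image for the account
    config_files: str            # account-specific configuration
    ip_address: str

def dispatch(server, rule: ConnectivityRule, personality: PersonalityModule):
    # "server" is assumed to expose these operations over the out-of-band channel.
    server.connect(rule.network_switch_port, rule.storage_switch_port)
    server.set_boot_pointer(personality.boot_image, personality.config_files)
    server.assign_ip(personality.ip_address)
    server.reinitialize()        # the server joins the target administrative group on restart
```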
  • Another way of looking at how the present invention can dynamically provide hosted service across disparate accounts is to view a portion of the servers 46 as being assigned to a pool of a plurality of virtual servers that may be selectively configured to access software and data for a particular administrative group 52.
  • When the dispatch module 156 has determined a need to add a server 46 to a particular administrative group 52, it automatically allocates one of the servers from the pool of virtual servers to that administrative group. Conversely, if the dispatch module determines that an administrative group can relinquish one of its servers 46, that relinquished server would be added to the pool of virtual servers that are available for reallocation to a different administrative group.
  • the group manager software 48 operates to "manufacture" or create one or more virtual servers out of this pool of the plurality of virtual servers on a just-in- time or as-needed basis.
  • the pool of virtual servers can either be a warm pool or a cold pool, or any combination thereof.
  • the virtual server is manufactured or constructed to be utilized by the desired administrative group in accordance with the set of connectivity rules 160 and personality modules 162.
  • the master decision logic module 154 is operably connected to a management console 158 that can display information about the master decision software program and accept account maintenance and update information for processing into the various databases.
  • a billing software module 160 is integrated into the engine group manager 48 in order to keep track of the billing based on the allocation of servers to a given customer account.
  • a customer account is billed at a higher rate for the hosted services when servers are dynamically reallocated to that customer account, based on the customer's service level agreement.
  • Figure 12 shows a representation of three different service level agreement arrangements for a given customer account.
  • the service level agreements are made for providing hosted services for a given period of time, such as a month.
  • At a first level shown at 170, the customer account is provided with the capacity to support hosted services for 640,000 simultaneous connections. If the customer account did not need a reallocation of servers to support capacity greater than the committed capacity for the first level 170, the customer would be charged an established rate for that level of committed capacity. At a second level shown at 172, the customer account can be dynamically expanded to support capacity of double the capacity at the first level 170.
  • the customer account would be charged a higher rate for the period of time that the additional usage was required.
  • the customer account could be charged a one-time fee for initiating the higher level of service represented by the second level 172.
  • charges for the second level 172 of service would be incurred at a rate that is some additional multiple of the rate charged for the first level 170.
  • the second level 172 represents a guaranteed expansion level available to the customer for the given period of time.
  • a third level 174 provides an optional extended additional level of service that may be able to be brought to bear to provide hosted services for the customer account.
  • the third level 174 provides up to a higher multiple of the level of service of the first level 170.
  • the host system makes use of the multi-site arrangement as previously described in order to bring in the required number of servers to meet this level of service.
  • the customer account is charged a second, higher rate for the period of time that the extended additional service is reallocated to this customer account.
  • charges for the third level 174 of service would be incurred at a rate that is an even larger multiple of the first level 170 for the given period of time that the extended additional third level 174 of service is provided for this customer account.
  • the customer account may be charged a one-time fee for initiating this third level 174 of service at any time during the given period.
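To make the tiered charging concrete, here is a hedged sketch of how a billing module might compute a period's charge from the time spent at each service level; the rates, multipliers and fee structure are placeholders, not figures from the patent.

```python
def monthly_charge(hours_at_level, base_rate=1.0,
                   level2_multiplier=1.5, level3_multiplier=2.5,
                   level2_setup_fee=0.0, level3_setup_fee=0.0):
    """hours_at_level: dict with hours spent at service levels 1, 2 and 3."""
    total = hours_at_level.get(1, 0) * base_rate
    if hours_at_level.get(2, 0):
        # Guaranteed expansion level: one-time fee plus a higher hourly rate.
        total += level2_setup_fee + hours_at_level[2] * base_rate * level2_multiplier
    if hours_at_level.get(3, 0):
        # Extended additional level: one-time fee plus an even higher hourly rate.
        total += level3_setup_fee + hours_at_level[3] * base_rate * level3_multiplier
    return total

# Example: a month mostly at the committed level with a brief burst into level 2.
# monthly_charge({1: 700, 2: 20}, base_rate=2.0, level2_setup_fee=100.0)
```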
  • the customer may alter the level of service contracted for the given customer account.
  • the service level agreement is increased by 50 percent from a first period to a second period in response to a higher anticipated peak usage for the given customer account.
  • the period for a service level agreement for a given customer account would typically be monthly, with suggestions being presented to the customer for recommended changes to the service level agreement for the upcoming billing period.
  • although this example is demonstrated in terms of simultaneous connections, it should be understood that the service level agreement for a given customer account can be generated in terms of a variety of performance measurements, such as simultaneous connections, hits, amount of data transferred, number of transactions, connect time, resources utilized by different application software programs, the revenue generated, or any combination thereof.
  • the service level agreement may provide for different levels of commitment for different types of resources, such as front-end servers, back-end servers, network connections or disk storage units.
  • a series of measurement modules 180, 181, 182, 183 and 184 each perform independent evaluations of the operation of the particular server on which the local decision software program 70 is executing. Outputs from these measurement modules are provided to an aggregator module 190 of the local decision software program 70.
  • a predictor module 192 generates expected response times and probabilities for various requests.
  • a fuzzy inference system 196 determines whether a request to add an engine blade 104 for the administrative group 52 will be made, or whether an offer to give up or remove an engine blade from the administrative group 52 will be made. The request to add or remove a blade is then communicated over communication channel 106 to the master decision software program 72.
  • the aggregator module 190 is executed on each server 46 within a given administrative group 52, and the predictor module 192 and fuzzy inference module 196 are executed on only a single server 46 within the given administrative group 52, with the outputs of the various measurement modules 180-184 being communicated to the designated server 46 across the communication channel 106.
  • the aggregator module 190, predictor module 192 and fuzzy inference module 196 may be executed on more than one server within a given administrative group for purposes of redundancy or distributed processing of the information necessary to generate the request to add or remove a blade.
  • the aggregator module 190 accomplishes a balancing across the various measurement modules 180-184 in accordance with a balancing formula; one possible form is sketched after this list.
  • the balanced request rate B_k is then passed to the predictor module 192 and the fuzzy inference module 196 of the local decision software program 70.
  • the window size for the measurement type k would be set to minimize any unnecessary intrusion by the measurement modules 180-184, while at the same time allowing for a timely and adequate response to increases in usage demand for the administrative group 52.
  • Figure 16 shows a sample of the workload measurements from the various measurement modules 180-184 under varying load conditions. It can be seen that no single workload measurement provides a consistently predictable estimate of the expected response time and the probability of that response time. As such, the fuzzy inference module 196 must consider three fundamental parameters: the predicted response times for various requests, the priority of these requests, and the probability of their occurrence. The fuzzy inference module 196 blends all three of these considerations to make a determination as to whether to request that a blade be added to or removed from the administrative group 52. An example of a fuzzy inference rule would be: if (priority is urgent) and (probability is abundant) and (expected response time is too high) then (make request for additional blade). A simplified sketch of such a rule appears after this list.
  • the end result of the fuzzy inference module 196 is to generate a decision surface contouring the need to request an additional server over the grid of the expected response time vs. the probability of that response time for this administrative group 52.
  • an example of such a decision surface is shown in Figure 17.
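
The pool-based reallocation performed by the dispatch module 146 can be illustrated with a short sketch. This is a minimal, hypothetical illustration in Python, not the implementation described in the patent; the names ServerPool, allocate_to_group and relinquish are invented for this example.

class ServerPool:
    """Minimal sketch: a shared pool of virtual servers reallocated across administrative groups."""

    def __init__(self, server_ids):
        self.free = set(server_ids)      # servers currently available for reallocation
        self.groups = {}                 # administrative group -> set of allocated servers

    def allocate_to_group(self, group):
        """Move one free server into the given administrative group, if any remain."""
        if not self.free:
            return None                  # pool exhausted; a multi-site request might follow
        server = self.free.pop()
        self.groups.setdefault(group, set()).add(server)
        return server

    def relinquish(self, group, server):
        """Return a server the group no longer needs to the shared pool."""
        self.groups.get(group, set()).discard(server)
        self.free.add(server)

# Example: a dispatch decision adds capacity to one group and later reclaims it.
pool = ServerPool(["blade-01", "blade-02", "blade-03"])
added = pool.allocate_to_group("customer-A")
pool.relinquish("customer-A", added)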
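
The tiered service level agreement of Figure 12 can likewise be sketched as a billing calculation. The rates and multipliers below are invented for illustration and are not taken from the patent; only the three-level structure (committed level 170, guaranteed expansion level 172, extended level 174) follows the description above.

def monthly_charge(hours_at_level_2=0, hours_at_level_3=0):
    """Hypothetical tiered billing: a flat charge for the committed capacity, higher
    rates for time spent at the expansion levels, and a one-time fee for initiating
    the extended third level."""
    BASE_RATE = 10_000.0        # established rate for the committed first level 170
    LEVEL2_HOURLY = 50.0        # additional multiple while expanded to level 172
    LEVEL3_HOURLY = 150.0       # even larger multiple while expanded to level 174
    LEVEL3_SETUP_FEE = 1_000.0  # one-time fee for initiating level 174

    charge = BASE_RATE + hours_at_level_2 * LEVEL2_HOURLY
    if hours_at_level_3 > 0:
        charge += LEVEL3_SETUP_FEE + hours_at_level_3 * LEVEL3_HOURLY
    return charge

# Example: one billing period with 12 hours at level 172 and 3 hours at level 174.
print(monthly_charge(hours_at_level_2=12, hours_at_level_3=3))  # 12050.0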
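
The balancing formula referenced for the aggregator module 190 is not reproduced in this text. One plausible reading, consistent with the mention of a per-measurement window size and a balanced request rate B_k, is a weighted average taken over a sliding window; the following LaTeX reconstruction is an assumption, not the formula from the patent:

    B_k = \frac{\omega_k}{W_k} \sum_{i=1}^{W_k} r_{k,i}

where W_k is the window size for measurement type k, r_{k,i} is the i-th sample reported by the corresponding measurement module within that window, and \omega_k is a weighting factor that balances the contribution of measurement type k against the other modules 180-184.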
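
Finally, the quoted fuzzy inference rule can be sketched in simplified, crisp form. A full fuzzy inference system would use membership functions and defuzzification over the decision surface of Figure 17; the thresholds below are invented for illustration.

def should_request_blade(priority, probability, expected_response_time):
    """Crisp sketch of the rule: if (priority is urgent) and (probability is abundant)
    and (expected response time is too high) then (make request for additional blade).
    Thresholds are hypothetical."""
    urgent = priority >= 0.8                    # "priority is urgent"
    abundant = probability >= 0.6               # "probability is abundant"
    too_slow = expected_response_time > 2.0     # seconds; "expected response time is too high"
    return urgent and abundant and too_slow

# Example: an urgent, likely request with a slow predicted response triggers a blade request.
should_request_blade(priority=0.9, probability=0.7, expected_response_time=3.5)  # True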

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Game Theory and Decision Science (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Computer And Data Communications (AREA)
EP01952274A 2000-07-17 2001-06-28 Verfahren und system zur bereitstellung der verwaltung eines dynamischen host-dienstes Withdrawn EP1312007A4 (de)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US21860200P 2000-07-17 2000-07-17
US218602P 2000-07-17
US09/710,095 US6816905B1 (en) 2000-11-10 2000-11-10 Method and system for providing dynamic hosted service management across disparate accounts/sites
US710095 2000-11-10
PCT/US2001/020571 WO2002007037A1 (en) 2000-07-17 2001-06-28 Method and system for providing dynamic hosted service management

Publications (2)

Publication Number Publication Date
EP1312007A1 true EP1312007A1 (de) 2003-05-21
EP1312007A4 EP1312007A4 (de) 2008-01-02

Family

ID=26913078

Family Applications (1)

Application Number Title Priority Date Filing Date
EP01952274A Withdrawn EP1312007A4 (de) 2000-07-17 2001-06-28 Verfahren und system zur bereitstellung der verwaltung eines dynamischen host-dienstes

Country Status (7)

Country Link
EP (1) EP1312007A4 (de)
JP (1) JP2004519749A (de)
KR (1) KR100840960B1 (de)
CN (1) CN1285055C (de)
AU (1) AU2001273047A1 (de)
CA (1) CA2415770C (de)
WO (1) WO2002007037A1 (de)

Families Citing this family (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9603582D0 (en) 1996-02-20 1996-04-17 Hewlett Packard Co Method of accessing service resource items that are for use in a telecommunications system
US6938256B2 (en) 2000-01-18 2005-08-30 Galactic Computing Corporation System for balance distribution of requests across multiple servers using dynamic metrics
US6816905B1 (en) 2000-11-10 2004-11-09 Galactic Computing Corporation Bvi/Bc Method and system for providing dynamic hosted service management across disparate accounts/sites
US8538843B2 (en) 2000-07-17 2013-09-17 Galactic Computing Corporation Bvi/Bc Method and system for operating an E-commerce service provider
US7765299B2 (en) 2002-09-16 2010-07-27 Hewlett-Packard Development Company, L.P. Dynamic adaptive server provisioning for blade architectures
WO2004038527A2 (en) 2002-10-22 2004-05-06 Isys Technologies Systems and methods for providing a dynamically modular processing unit
EP1557075A4 (de) 2002-10-22 2010-01-13 Sullivan Jason Nicht-peripheres verarbeitungssteuermodul mit verbesserten wärmeableiteigenschaften
US7242574B2 (en) 2002-10-22 2007-07-10 Sullivan Jason A Robust customizable computer processing system
WO2005017783A2 (en) * 2003-08-14 2005-02-24 Oracle International Corporation Hierarchical management of the dynamic allocation of resourses in a multi-node system
US7552171B2 (en) 2003-08-14 2009-06-23 Oracle International Corporation Incremental run-time session balancing in a multi-node system
US7437459B2 (en) 2003-08-14 2008-10-14 Oracle International Corporation Calculation of service performance grades in a multi-node environment that hosts the services
US7516221B2 (en) 2003-08-14 2009-04-07 Oracle International Corporation Hierarchical management of the dynamic allocation of resources in a multi-node system
US7873684B2 (en) 2003-08-14 2011-01-18 Oracle International Corporation Automatic and dynamic provisioning of databases
US7441033B2 (en) 2003-08-14 2008-10-21 Oracle International Corporation On demand node and server instance allocation and de-allocation
US8365193B2 (en) 2003-08-14 2013-01-29 Oracle International Corporation Recoverable asynchronous message driven processing in a multi-node system
US7937493B2 (en) 2003-08-14 2011-05-03 Oracle International Corporation Connection pool use of runtime load balancing service performance advisories
US7437460B2 (en) 2003-08-14 2008-10-14 Oracle International Corporation Service placement for enforcing performance and availability levels in a multi-node system
US7664847B2 (en) 2003-08-14 2010-02-16 Oracle International Corporation Managing workload by service
US7953860B2 (en) 2003-08-14 2011-05-31 Oracle International Corporation Fast reorganization of connections in response to an event in a clustered computing system
CN100547583C (zh) 2003-08-14 2009-10-07 甲骨文国际公司 数据库的自动和动态提供的方法
US20060064400A1 (en) 2004-09-21 2006-03-23 Oracle International Corporation, A California Corporation Methods, systems and software for identifying and managing database work
US8554806B2 (en) 2004-05-14 2013-10-08 Oracle International Corporation Cross platform transportable tablespaces
JP2006085209A (ja) * 2004-09-14 2006-03-30 Hitachi Ltd 計算機システムのデプロイメント方式
EP1811376A4 (de) 2004-10-18 2007-12-26 Fujitsu Ltd Programm, verfahren und einrichtung zur operationsverwaltung
WO2006043308A1 (ja) 2004-10-18 2006-04-27 Fujitsu Limited 運用管理プログラム、運用管理方法および運用管理装置
EP3079061A1 (de) 2004-10-18 2016-10-12 Fujitsu Limited Betriebsverwaltungsprogramm, betriebsverwaltungsverfahren und betriebsverwaltungsvorrichtung
US9176772B2 (en) 2005-02-11 2015-11-03 Oracle International Corporation Suspending and resuming of sessions
US7526409B2 (en) 2005-10-07 2009-04-28 Oracle International Corporation Automatic performance statistical comparison between two periods
DE102006033863A1 (de) 2006-07-21 2008-01-24 Siemens Ag Verschaltungsschnittstelle für flexibles Online/Offline-Deployment einer n-schichtigen Softwareapplikation
US8909599B2 (en) 2006-11-16 2014-12-09 Oracle International Corporation Efficient migration of binary XML across databases
US7990724B2 (en) 2006-12-19 2011-08-02 Juhasz Paul R Mobile motherboard
US8095970B2 (en) * 2007-02-16 2012-01-10 Microsoft Corporation Dynamically associating attribute values with objects
JP5056504B2 (ja) 2008-03-13 2012-10-24 富士通株式会社 制御装置、情報処理システム、情報処理システムの制御方法および情報処理システムの制御プログラム
US8238538B2 (en) 2009-05-28 2012-08-07 Comcast Cable Communications, Llc Stateful home phone service
US9165086B2 (en) 2010-01-20 2015-10-20 Oracle International Corporation Hybrid binary XML storage model for efficient XML processing
US8595267B2 (en) * 2011-06-27 2013-11-26 Amazon Technologies, Inc. System and method for implementing a scalable data storage service
TWI437426B (zh) 2011-07-08 2014-05-11 Quanta Comp Inc 伺服器機櫃系統
US9733983B2 (en) * 2011-09-27 2017-08-15 Oracle International Corporation System and method for surge protection and rate acceleration in a traffic director environment
US20130308266A1 (en) * 2011-11-10 2013-11-21 Jason A. Sullivan Providing and dynamically mounting and housing processing control units
US10063450B2 (en) * 2013-07-26 2018-08-28 Opentv, Inc. Measuring response trends in a digital television network
GB2517195A (en) 2013-08-15 2015-02-18 Ibm Computer system productivity monitoring
US10764158B2 (en) 2013-12-04 2020-09-01 International Business Machines Corporation Dynamic system level agreement provisioning
US10057337B2 (en) * 2016-08-19 2018-08-21 AvaSure, LLC Video load balancing system for a peer-to-peer server network
US10474653B2 (en) 2016-09-30 2019-11-12 Oracle International Corporation Flexible in-memory column store placement
FR3071630B1 (fr) * 2017-09-25 2021-02-19 Schneider Electric Ind Sas Procede de gestion de modules logiciels embarques pour un calculateur electronique d'un appareil electrique de coupure
US12007941B2 (en) 2017-09-29 2024-06-11 Oracle International Corporation Session state tracking
US11936739B2 (en) 2019-09-12 2024-03-19 Oracle International Corporation Automated reset of session state

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10154373A (ja) * 1996-09-27 1998-06-09 Sony Corp データデコードシステムおよびデータデコード方法、伝送装置および方法、並びに、受信装置および方法
KR100230281B1 (ko) * 1997-04-14 1999-11-15 윤종용 프로그램 번호를 전송 및 수신하는 멀티미디어 시스템과 프로그램 번호 전송 및 수신방법
JP3656874B2 (ja) 1997-07-04 2005-06-08 ソニー株式会社 電子機器制御システムおよび方法、再生装置、並びに出力装置
KR100304644B1 (ko) 1998-06-19 2001-11-02 윤종용 네트워크를통한정보전송장치및방법

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0694837A1 (de) * 1994-07-25 1996-01-31 International Business Machines Corporation Dynamischer Arbeitsbelastungsausgleich
US5938732A (en) * 1996-12-09 1999-08-17 Sun Microsystems, Inc. Load balancing and failover of network services
EP0942363A2 (de) * 1998-03-11 1999-09-15 International Business Machines Corporation Verfahren und Vorrichtung zum Steuern der Serveranzahl in einem Mehrsystemcluster
WO2000004458A1 (en) * 1998-07-14 2000-01-27 Massachusetts Institute Of Technology Global document hosting system utilizing embedded content distributed ghost servers

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of WO0207037A1 *

Also Published As

Publication number Publication date
CN1285055C (zh) 2006-11-15
WO2002007037A1 (en) 2002-01-24
EP1312007A4 (de) 2008-01-02
KR100840960B1 (ko) 2008-06-24
JP2004519749A (ja) 2004-07-02
CA2415770C (en) 2010-04-27
CA2415770A1 (en) 2002-01-24
AU2001273047A1 (en) 2002-01-30
KR20030019592A (ko) 2003-03-06
WO2002007037A9 (en) 2004-03-04
CN1441933A (zh) 2003-09-10

Similar Documents

Publication Publication Date Title
US6816905B1 (en) Method and system for providing dynamic hosted service management across disparate accounts/sites
CA2415770C (en) Method and system for providing dynamic hosted service management
US8538843B2 (en) Method and system for operating an E-commerce service provider
US7844513B2 (en) Method and system for operating a commissioned e-commerce service prover
US20050080891A1 (en) Maintenance unit architecture for a scalable internet engine
US6597956B1 (en) Method and apparatus for controlling an extensible computing system
CN101118521B (zh) 跨越多个逻辑分区分布虚拟输入/输出操作的系统和方法
US7370013B1 (en) Approach for determining an amount to bill a customer for the use of resources
US7475274B2 (en) Fault tolerance and recovery in a high-performance computing (HPC) system
Moreno-Vozmediano et al. Orchestrating the deployment of high availability services on multi-zone and multi-cloud scenarios
US20040088414A1 (en) Reallocation of computing resources
US20030126265A1 (en) Request queue management
US9483258B1 (en) Multi-site provisioning of resources to software offerings using infrastructure slices
EP4029197B1 (de) Verwendung von netzwerkanalytik zur bereitstellung von diensten
JP2004508616A (ja) 拡張可能コンピューティングシステムの制御方法および装置
CN100421382C (zh) 高扩展性互联网超级服务器的维护单元结构及方法
US7558858B1 (en) High availability infrastructure with active-active designs
Castets et al. IBM TotalStorage Enterprise Storage Server Model 800
EP2098956A1 (de) Computersystem und Betriebsverfahren zur Verwaltung eines Rechnerpools
Youn et al. The approaches for high available and fault-tolerant cluster systems
Miljković Geographically dispersed cluster of web application servers on virtualization platforms
Miljković Review of cluster computing for high available business web applications
Kamboj et al. LOAD BALANCING IN CLOUD ENVIRONMENT: A REVIEW
CN1397902A (zh) 佣金式电子商务服务提供商运营方法及系统

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20030214

AK Designated contracting states

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Extension state: AL LT LV MK RO SI

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: GALACTIC COMPUTING CORPORATION

RIN1 Information on inventor provided before grant (corrected)

Inventor name: KOROBKA, ALEXANDER,GAL. CORP.,SHELL ELEC. CO.LTD

Inventor name: GUISTOZZI, JOSEPH,GAL. CORP.,SHELL ELEC. CO. LTD

Inventor name: DENG, YUEFAN,GAL. CORP.,SHELL ELECTRIC CO. LTD

Inventor name: ENGEL, STEPHEN J.,GAL. CORP.,SHELL ELEC. CO. LTD

Inventor name: SMITH, PHILIP, S.,GAL. CORP.,SHELL ELEC. CO. LTD

Inventor name: SHEETS, KITRICK, B.,GAL. CORP.,SHELL EL. CO. LTD

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: GALACTIC COMPUTING CORPORATION (BVI/IBC)

A4 Supplementary search report drawn up and despatched

Effective date: 20071203

17Q First examination report despatched

Effective date: 20080401

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20151013