US20050198335A1 - Distributed load balancing for single entry-point systems - Google Patents


Info

Publication number
US20050198335A1
Authority
US
United States
Prior art keywords
node
intake
game
service
cluster
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US11/101,777
Other versions
US7395335B2 (en)
Inventor
Justin Brown
John Smith
Craig Link
Hoon Im
Charles Barry
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US09/778,223 (now US7155515B1)
Application filed by Microsoft Corp filed Critical Microsoft Corp
Priority to US11/101,777 (published as US7395335B2)
Publication of US20050198335A1
Assigned to MICROSOFT CORPORATION reassignment MICROSOFT CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BARRY, CHARLES H., BROWN, JUSTIN D., LINK, CRAIG A., SMITH, JOHN W., IM, HOON
Publication of US7395335B2
Application granted
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC reassignment MICROSOFT TECHNOLOGY LICENSING, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MICROSOFT CORPORATION
Application status: Active
Adjusted expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network-specific arrangements or communication protocols supporting networked applications
    • H04L 67/10 Network-specific arrangements or communication protocols supporting networked applications in which an application is distributed across nodes in the network
    • H04L 67/1002 Network-specific arrangements or communication protocols supporting networked applications in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers, e.g. load balancing
    • H04L 67/1004 Server selection in load balancing
    • H04L 67/1008 Server selection in load balancing based on parameters of servers, e.g. available memory or workload
    • H04L 67/1014 Server selection in load balancing based on the content of a request

Abstract

A method and system for distributing work load in a cluster of at least two service resources. Depending upon the configuration, a service resource may be an individual process, such as a single instance of a computer game, or a node on which multiple processes are executing, such as a Server. Initial connection requests from new clients are directed to a single entry-point service resource in the cluster, called an intake. A separate intake is designated for each type of service provided by the cluster. Clients are processed as a group, for the duration of their session, at the service resource that was designated as the intake when they initially connected. Based upon its loading, the current intake service resource determines that another service resource in the cluster should become a new intake for subsequent connection requests received from new clients. Selection of another service resource to become the new intake is based on the current work load of each resource in the cluster. All resources in the cluster are periodically informed of the resource last designated as the intake for each service being provided, and of the current load on each resource in the cluster. Subsequently, new clients requesting a service are directed to the newly designated intake for that service and are processed on that resource for the duration of their session.

Description

    FIELD OF THE INVENTION
  • The present invention generally relates to a method and system for balancing resource utilization in a cluster of resources, and more specifically, to assigning related service requests to a specific resource to efficiently process the service requests.
  • BACKGROUND OF THE INVENTION
  • The term “load balancing” as used in connection with a computer network is a method of dividing work between two or more computational resources so that more work is completed in less time. In general, all clients of a service or services performed by such resources are served more quickly when the computational load is balanced among multiple resources. Typically, a cluster of resources is formed when balancing a computational load. For example, companies whose Web sites receive a great deal of traffic usually use clusters of Web server computers for load balancing.
  • Load balancing among a cluster of resource nodes is typically done by distributing service requests and service processing throughout the cluster of resources, without any regard for grouping. The goal is to share the processing task among the available resource nodes. Doing so minimizes turn-around time and maximizes resource utilization. In most cases, the particular resource that accepts and processes a service request is irrelevant to a requesting client or to the resources that carry out a service request. For example, it is generally irrelevant which Web server, in a cluster of Web servers, processes a request by a client for a current stock quote.
  • There are several important considerations in implementing load-balanced systems, including routing of tasks, fault tolerance, node priority, and load distribution. At a system level, controlling the overall load-balancing function involves tasks such as: (1) determining how client requests should be communicated throughout a cluster of resources (i.e., routing); (2) determining the status of resources within a cluster; (3) determining how a load will be handled if one of the resource nodes that was handling that load fails (i.e., fault tolerance); and (4) determining how the cluster will be reconfigured to share the processing load when the number of available resource nodes changes.
  • At a performance level, it is important to determine when and how a load will be distributed within a cluster. This decision is typically defined in accord with a load distribution algorithm. Typical algorithms of this type include: (1) round robin, (2) weighted round robin, (3) least connections, and (4) statistical mapping. In some load-balancing systems, a centralized control implements the algorithm and effects the load balancing among resources as defined by the algorithm. For example, if two Web servers are available to handle a work load, a third server may determine the Web server that will handle the work (i.e., control routing of service tasks).
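As a rough illustration of how two of the load-distribution algorithms named above behave, the following Python sketch implements round robin and weighted round robin selection. The node names and weights are invented for the example; real balancers implement these rotations in routers or drivers, not application code.

```python
from itertools import cycle

def round_robin(nodes):
    """Yield nodes in rotation, from the top of the list to the
    bottom, then starting again at the top."""
    return cycle(nodes)

def weighted_round_robin(nodes, weights):
    """Expand each node by its weight before rotating, so nodes
    with higher weighting factors receive more requests."""
    expanded = [n for n, w in zip(nodes, weights) for _ in range(w)]
    return cycle(expanded)

rr = round_robin(["web1", "web2", "web3"])
picks = [next(rr) for _ in range(4)]        # wraps back to web1

wrr = weighted_round_robin(["web1", "web2"], [2, 1])
wpicks = [next(wrr) for _ in range(6)]      # web1 appears twice as often
```

Here the "weight" simply repeats a node in the rotation; production balancers often use smoother interleavings, but the distribution ratio is the same.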
  • This function is often the case with Domain Name System (DNS) load-balancing systems, such as Cisco System, Inc.'s DistributedDirector™. A client simply issues a request to connect to a domain name site, such as www.microsoft.com. This request is routed to a DNS server, which selects the appropriate Web server address. The manner in which the DNS server selects a Web server address is determined by a specific load-distribution algorithm implemented in the DNS server. Once determined, the Web server address is returned to the client, which then initiates a connection request to the Web server address. Thus, the DNS server is the central controller of the load-balancing system that directs traffic throughout the cluster.
  • To avoid redirecting the client and requiring a second connection request, some hardware load balancers use a centralized technique called Network Address Translation (NAT). NAT is often included as part of a hardware router used in a corporate firewall. NAT typically provides for the translation of an Internet Protocol (IP) address used within one network known as the “outside network” (such as the Internet) to a different IP address employed within another network known as the “inside network” (such as a local area network comprising a cluster of resources). As with a DNS server, NAT devices typically have a variety of load-balancing algorithms available to accomplish dynamic mapping, including round robin, weighted round robin, and least connections.
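The flow-to-server mapping that a NAT device maintains can be sketched as follows. This is a simplified model with invented addresses, not the behavior of any particular router: it rewrites flows addressed to one outside (virtual) IP to a dynamically chosen inside server, using a simple rotation for new flows and reusing the mapping for packets on an existing flow.

```python
# Hypothetical NAT translation table: (client ip, client port) -> inside server.
nat_table = {}
inside_servers = ["10.0.0.1", "10.0.0.2"]
next_server = 0  # simple rotation among inside addresses

def translate(client_ip, client_port):
    """Map each new flow to an inside server; packets belonging to an
    existing flow keep their original mapping."""
    global next_server
    flow = (client_ip, client_port)
    if flow not in nat_table:
        nat_table[flow] = inside_servers[next_server % len(inside_servers)]
        next_server += 1
    return nat_table[flow]

a = translate("198.51.100.7", 40001)   # new flow
b = translate("198.51.100.7", 40001)   # same flow, same inside server
c = translate("198.51.100.8", 40002)   # next flow rotates to the other server
```

Because the mapping is cached per flow, the client needs no redirect and no second connection request, which is the point of the NAT approach described above.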
  • NAT can also be used in conjunction with policy routing. Policy routing is a routing technique that enables network administrators to distribute traffic among multiple paths based on the traffic characteristics. Instead of simply routing based upon the destination address, policy-based routing enables network administrators to determine and implement routing policies to allow or deny paths in accord with parameters such as the identity of a particular end system, the application requested, the protocol used, and the size of data packets.
  • Another approach to balance client requests employs a content-smart switch. Like NAT devices, content-smart switches are typically a form of router inserted between clients and Web servers. These switches typically use tags from a client's HTTP request, or use information from cookies stored on the client, to determine the Web server to which the client request will be relayed. For example, if a client request tag or cookie identifies the client as a “premium” customer, then the switch will route the client to a Web server that is reserved for premium customers. However, if a cluster of Web servers is reserved for premium clients, then other techniques must still be used to balance the load of premium clients among the cluster of reserved Web servers.
  • Central load-balancing control is easy to implement and maintain, but is not inherently fault-tolerant and usually requires backup components. In each example above, a backup DNS server, NAT device, and content-smart switch would be required for each corresponding cluster to continue operating if the primary controller failed. Conversely, distributed load-balancing control provides redundancy for fault tolerance, but it requires coordination between the resources in a cluster. Each resource must be aware of the load on the other resources and/or on the cluster as a whole to be capable of managing the load, if necessary.
  • Distributed load-balancing control can be implemented in hardware, but usually still requires backup components. In contrast, software load-balancing systems can be distributed among each node in the cluster. Although each node must use some of its resources to coordinate the load-balancing function, distributed software load balancing eliminates the cost of, and reliance on, intermediary hardware. Alternatively, distributed software load balancing among each node can be used in addition to intermediary routers/balancers.
  • Popular software load-balancing systems include Microsoft Corporation's WINDOWS NT™ Load Balancing Service (WLBS) for WINDOWS NT™ Server Enterprise Edition, and the corresponding upgrade version, called Network Load Balancing (NLB), which is a clustering technology included in the WINDOWS™ 2000 Advanced Server and Datacenter Server operating systems. Both use a fully distributed software architecture. For example, an identical copy of the NLB driver runs on each cluster node. At each cluster node, the driver acts as a filter between the node's network adapter driver and its Transmission Control Protocol/Internet Protocol (TCP/IP) stack. A broadcast subnet delivers all incoming client network traffic to each cluster node, which eliminates the need to route incoming packets to individual cluster nodes. The NLB driver on each node allows a portion of the incoming client network traffic to be received by the node. A load-distribution algorithm on each node determines which incoming client packets to accept. This filtering of unwanted packets is faster than routing packets (which involves receiving, examining, rewriting, and resending). Thus, NLB typically delivers higher network throughput than central control solutions.
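The filtering idea behind NLB's fully distributed architecture, in which every node sees all incoming traffic, applies the same deterministic rule, and accepts only its own share, can be sketched as below. The hash and partition scheme here are illustrative inventions, not NLB's actual algorithm.

```python
import hashlib

def accepts(node_index, node_count, client_addr):
    """Every node runs the same deterministic hash over the client
    address; exactly one node's index matches, so that node accepts
    the packet and all others silently drop it. No routing occurs."""
    digest = hashlib.md5(client_addr.encode()).digest()
    return digest[0] % node_count == node_index

# Simulate a 4-node cluster filtering 50 broadcast client addresses.
clients = [f"192.0.2.{i}" for i in range(50)]
owners = [[c for c in clients if accepts(i, 4, c)] for i in range(4)]
```

Dropping unwanted packets locally is cheaper than the receive-examine-rewrite-resend cycle of a routing device, which is why this style of filtering can deliver higher throughput than central control.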
  • In conjunction with control of the load-balancing function, load-distribution algorithms determine the distribution of loads throughout a cluster. Unsophisticated algorithms may do nothing more than distribute load by sequentially routing incoming client requests to each successive resource node (i.e., a round robin technique). More generally, a round robin algorithm is a centralized method of selecting among elements in a group in some rational order, usually from the top of a list to the bottom of the list, and then starting again at the top of the list. Another application of the round robin technique is in computer microprocessor operation, wherein different programs take turns using the resources of the computer. In this case, execution of each program is limited to a short time period, then suspended to give another program a turn (or “time-slice”). This approach is referred to as round robin process scheduling.
  • By extension to Internet server farms, a Round Robin Domain Name System (RRDNS) enables a limited form of TCP/IP load balancing. As suggested by the above description of the DNS server model, RRDNS uses DNS to map incoming IP requests to a defined set of servers in a round robin fashion. Thus, the load balancing is accomplished by appropriate routing of the incoming requests.
  • Other algorithms for implementing load balancing by distributed routing of incoming requests include weighted round robin, least connections, and random assignment. As the name suggests, weighted round robin simply applies a weighting factor to each node in the list, so that nodes with higher weighting factors have more requests routed to them. Alternatively, the cluster may keep track of the node having the least number of connections to it and route incoming client requests to that node. The random (or statistical) assignment method distributes the requests randomly throughout the cluster. If each node has an equal chance of being randomly assigned an incoming client request, then the statistical distribution will tend to equalize as the number of client requests increases. This technique is useful for clusters that must process a large number of client requests. For example, NLB uses a statistical distribution algorithm to equalize Web server clusters.
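The least-connections and random (statistical) distribution methods described above can be sketched in a few lines; node names and counts are invented, and the seeded generator is used only to make the example repeatable.

```python
import random

def least_connections(conn_counts):
    """Route to the node currently holding the fewest connections."""
    return min(conn_counts, key=conn_counts.get)

def statistical(nodes, rng):
    """Random assignment: each node has an equal chance, so the
    distribution tends to equalize as request volume grows."""
    return rng.choice(nodes)

counts = {"a": 3, "b": 1, "c": 2}
lc = least_connections(counts)          # node with the fewest connections

rng = random.Random(0)                  # seeded for reproducibility
tally = {"a": 0, "b": 0}
for _ in range(10_000):
    tally[statistical(["a", "b"], rng)] += 1
```

After 10,000 assignments the two tallies differ by only a small fraction of the total, illustrating why statistical distribution suits clusters with very large request volumes.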
  • Some of the above-noted algorithms may be enhanced by making distribution decisions based on a variety of parameters, including availability of specific nodes, node capacity for doing a specific type of task, node processor utilization, and other performance criteria. However, each of the above-described systems and algorithms considers each client request equally, independent from other client requests. This manner of handling independent requests, and the nodes that service them, is referred to as “stateless.” Stateless resource nodes do not keep track of information related to client requests, because there is no ongoing session between the client and the cluster. For example, an individual Web server, in a cluster of Web servers that provide static Web pages, does not keep track of each client making a request so that the same client can be routed again to that particular Web server to service subsequent requests.
  • However, it is not uncommon for clusters to provide some interactive service to clients and retain information related to a client request throughout a client session. For example, many clusters servicing E-commerce maintain shopping cart contents and Secure Socket Layer (SSL) authentication during a client session. These applications require “stateful nodes,” because the cluster must keep track of a client's session state. Stateful nodes typically update a database when serving a client request. When multiple stateful nodes are used, they must coordinate updates to avoid conflicts and keep shared data consistent.
  • Directing clients to the same node can be accomplished with client affinity parameters. For example, all TCP connections from one client IP address can be directed to the same cluster node. Alternatively, a client affinity setting can direct all client requests within a specific address range to a single cluster node. However, such affinities skew the load balance in a cluster. In an attempt to keep client requests as balanced as possible while maintaining stateful client sessions on a node, a first-tier cluster of stateless nodes is often used to balance new incoming client requests, and a second-tier cluster of stateful nodes is used to balance the ongoing client sessions. A third-tier cluster is also often used for secure communication with databases. For example, E-commerce Web sites typically use NLB as a first-tier load-balancing system, in conjunction with Component Object Model Plus (COM+) on the second tier, and Microsoft™ Cluster Service (MSCS) on the third tier.
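Both affinity styles mentioned above, pinning a single client IP and pinning an address range, can be sketched deterministically. The addresses, ranges, and node names here are invented for illustration.

```python
import ipaddress

def affinity_node(client_ip, node_count):
    """Single-client affinity: all TCP connections from one client IP
    map to the same node, derived from the address itself."""
    return int(ipaddress.ip_address(client_ip)) % node_count

def range_affinity(client_ip, ranges):
    """Address-range affinity: every client inside a configured range
    is pinned to the node assigned to that range."""
    addr = ipaddress.ip_address(client_ip)
    for network, node in ranges:
        if addr in ipaddress.ip_network(network):
            return node
    return None  # no configured range matches

n1 = affinity_node("192.0.2.10", 4)
n2 = affinity_node("192.0.2.10", 4)     # same client always lands on the same node
r = range_affinity("10.1.2.3",
                   [("10.1.0.0/16", "node-a"), ("10.2.0.0/16", "node-b")])
```

The determinism is the whole point: repeated requests from the same client (or range) always reach the same stateful node, at the cost of a less even load.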
  • However, the above systems still consider each client request independent from the requests of all other clients. In some cases, there is a need to group certain requests and concomitant processing services, and to maintain the group during the processing, even though the requests originate from different clients. For example, in online multi-player computer games, such as Hearts, it is beneficial to direct a number of client game players to a common node and to process the game service requested by those clients on that node throughout the entire play of the game. Doing so increases the speed and likelihood of matching interested client players together in a game and maintains continuity of game play. If players are not directed to a common node, one or two players may be left waiting to play at several different nodes, when these individuals could already be involved in playing the game had they been directed to a single node. Also, keeping a group of players together on a single resource or node eliminates delays that would be caused if the processing of the game service for those players were shared between different nodes in the cluster.
  • Although the need to group players on a single resource is important, it remains desirable to balance the overall processing load represented by all groups of players and various game services (or other processing tasks) being implemented by a cluster among the nodes of the cluster to most efficiently utilize the available processing resources. It is also desirable to be able to scale the load and tolerate faults by dynamic changes to the number of resources in the cluster. Microsoft™ Corporation's Gaming Zone represents a cluster of nodes in which multiple players must be allocated in groups that achieve such a desired balance between different available processing nodes. Previous load-balancing hardware and software techniques have not provided for grouping client requests for a related task on a specific resource node. Accordingly, a technique was required that would both group such related tasks and still balance the overall processing load among all available resource nodes on a cluster.
  • SUMMARY OF THE INVENTION
  • The present invention satisfies the need to group and retain clients on a common resource so long as the processing service they require is provided, while distributing the processing load among resources to achieve efficient utilization of the resources in a cluster. In the present invention, a hybrid of stateless and stateful load balancing is employed, using distributed software for decentralized control, and an integrated distribution algorithm for determining the resource that will process client requests. For relatively light traffic, the present invention eliminates the need for multi-tier load-balancing systems, and for high traffic, reduces the number of first-tier stateless nodes required by doing preliminary load balancing on new incoming client requests.
  • More specifically, the present invention is directed to a method and system for distributing a processing load among a plurality of service resources in a cluster. A cluster may be either a single node implementing multiple instances of a service resource, or multiple nodes, wherein each node can implement multiple instances of different service resource types. Similarly, a service resource may comprise an individual process, such as a single instance of a computer game, or may comprise an entire node that is executing multiple processes, such as a Server. Thus, as used in the claims that follow, the terms “cluster,” “node,” and “resource” are defined in relative terms, but are not limited to a specific hardware or other fixed configuration. As used herein, a cluster includes a plurality of service resources, and the plurality of service resources may be executed on one or more nodes.
  • The method directs initial connection requests from clients to a single entry-point service resource in the cluster, called an intake. A separate intake is designated for each different type of service that is being provided by the cluster. One or more instances of each type of service are processed by one or more nodes in the cluster. Clients are grouped together and processed in a group at a service resource for as long as the service is provided by the resource. As used herein, a client may be a single client device, such as a personal computer. However, the term client is not intended to be limiting. For example, a client may also be an instance of a function, such as a browser. Thus, multiple instances of a function may run on a single computer, and each instance can be considered an individual client. This may be the case where multiple instances of a browser are run on one personal computer to play a game in multiple groups, or different games, concurrently.
  • As a function of loading, a first service resource that was designated as the intake determines that another service resource in the cluster should become a new intake for subsequent connection requests from clients. The other service resource is then designated as the new intake. New client requests for the service are then directed to the new intake to form a second group of clients. The second group of clients will continue to receive services from the second service resource for as long as the service is provided.
  • Designating a service resource as the intake is preferably done by calculating a rating value for each service resource in the cluster, and then selecting the service resource to be the new intake as a function of the rating value. The selected service resource broadcasts a message to the rest of the resources in the cluster, informing the other resources of its identity as the new intake. Any service resource that later receives a request for service from a new client will then direct the new client to the new intake for that service. The client may simply be given a network address to the new intake and required to initiate a connection to the new intake. Alternatively, the client's connection request may be forwarded directly to the new intake.
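The rating-and-broadcast hand-off described above can be sketched as follows. The rating formula, the message fields, and the node data are assumptions invented for the example; the patent specifies only that a rating value is computed per resource and that the choice is broadcast to the cluster.

```python
def rating(load, capacity):
    """Illustrative rating: more spare capacity yields a higher
    rating (the patent does not prescribe a particular formula)."""
    return (capacity - load) / capacity

def pick_new_intake(resources):
    """The current intake selects the peer with the best rating to
    become the new intake for subsequent client connections."""
    return max(resources, key=lambda r: rating(r["load"], r["capacity"]))

cluster = [
    {"name": "node-a", "load": 80, "capacity": 100},
    {"name": "node-b", "load": 20, "capacity": 100},
    {"name": "node-c", "load": 50, "capacity": 100},
]
new_intake = pick_new_intake(cluster)["name"]

# Hypothetical broadcast announcing the hand-off to every resource.
broadcast = {"type": "NEW_INTAKE", "service": "hearts", "intake": new_intake}
```

On receiving such a message, any resource later contacted by a new client would either return the new intake's network address for the client to reconnect, or forward the connection request directly, matching the two redirection options in the text above.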
  • To distribute the work load throughout the cluster, the service resource that is designated as the current intake first evaluates its own operating conditions to calculate a load value. If the load value exceeds a predetermined threshold, then the intake selects another service resource to be designated as the new intake. The selection is preferably made based on the load value described above. After selecting a new service resource, the current intake service resource broadcasts a message to all other service resources identifying the new intake. The newly designated intake recognizes its new function and accepts connection requests from new clients.
  • As a fault tolerance measure, a service resource will assume the designation as the new intake if that service resource has not received a status message from the current intake within a predetermined period of time. If more than one service resource assumes the designation as the new intake, then the service resource that will be designated as the new intake is determined by a numerical identifier associated with each of the service resources.
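This fault-tolerance rule, a status-message timeout plus an identifier tie-break, can be sketched as below. The timeout value and the lowest-id-wins direction are assumptions; the patent says only that a predetermined period elapses and that the numerical identifier decides among multiple claimants.

```python
TIMEOUT = 5.0  # assumed: seconds of silence before presuming the intake failed

def should_claim_intake(now, last_status_at):
    """A resource claims the intake role once the current intake's
    periodic status messages have been absent for longer than TIMEOUT."""
    return (now - last_status_at) > TIMEOUT

def resolve_claim(claimants):
    """If several resources claim the role simultaneously, the numeric
    identifier breaks the tie (lowest id wins in this sketch)."""
    return min(claimants, key=lambda c: c["id"])

claims = [{"id": 7, "name": "node-g"}, {"id": 3, "name": "node-c"}]
winner = resolve_claim(claims)["name"]
stale = should_claim_intake(now=12.0, last_status_at=4.0)  # 8 s of silence
```

Because every resource applies the same tie-break rule to the same identifiers, all survivors converge on the same new intake without any central coordinator.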
  • Another aspect of the present invention is directed to a machine-readable medium on which are stored machine-executable instructions that, when executed by a processor, cause the processor to perform functions that are generally consistent with the steps of the method described above. A machine readable medium may store machine instructions executed by the cluster of resources, or machine instructions executed by the clients, or both.
  • Yet another aspect of the present invention is directed to a system for distributing work load in a cluster. The system comprises at least one processor for implementing the cluster. Although multiple processors may be more commonly used for a cluster, the system can be implemented and used to balance the load among a plurality of resources (e.g., software objects) executed by a single processor, or by multiple processors, to provide services to a plurality of clients. The system further comprises an interface coupling the processor(s) to the clients. A plurality of service resources are operatively connected to each other and to the clients. Each resource is capable of being designated as an intake that accepts new client requests for a specific service, forming a group of clients that will continue to receive services from the service resource for as long as the services are provided. Machine instructions are stored in a memory that is accessible by the one or more processors. The machine instructions cause the one or more processors to implement functions generally consistent with the steps of the method discussed above.
  • BRIEF DESCRIPTION OF THE DRAWING FIGURES
  • The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same becomes better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein:
  • FIG. 1 is a schematic block diagram of an exemplary Server (PC) system suitable for implementing the present invention;
  • FIG. 2 illustrates the architecture of a preferred embodiment of the present invention in which a cluster of Servers host online game services;
  • FIG. 3 illustrates the logic of one preferred embodiment for a client device that is connecting to the intake for a specific game type in the cluster of FIG. 2;
  • FIG. 4 illustrates the logic of one preferred embodiment for providing a proxy service to manage client connections to a node in the cluster;
  • FIG. 5A illustrates a portion of the load-balancing logic employed on each node in the cluster for handling User Datagram Protocol (UDP) messages; and
  • FIG. 5B illustrates a portion of the load-balancing logic employed on each node in the cluster for handling load distribution and fault tolerance.
  • DESCRIPTION OF THE PREFERRED EMBODIMENT
  • Exemplary Operating Environment
  • FIG. 1 and the following discussion are intended to provide a brief, general description of a suitable computing environment in which the present invention may be implemented, preferably in regard to a server that stores and provides Web pages and a client that requests the Web pages and displays them to a user. Although not required, the present invention will be described in the general context of computer-executable instructions, such as program modules that are executed by a computer configured as a Server, and by client computing devices, such as personal computers. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Also, those skilled in the art will appreciate that the present invention may be practiced to balance requests from other client computing devices, including hand-held devices, pocket personal computing devices, digital cell phones adapted to connect to a network, microprocessor-based or programmable consumer electronic devices, game consoles, TV set-top boxes, multiprocessor systems, network personal computers, minicomputers, mainframe computers, industrial control equipment, automotive equipment, aerospace equipment, and the like. The invention may be practiced in a single device with one or more processors that process multiple tasks, but preferably will be practiced in distributed computing environments where tasks are performed by separate processing devices that are linked by a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
  • With reference to FIG. 1, an exemplary system for implementing the present invention includes a general purpose computing device in the form of a conventional Server 20, provided with a processing unit 21, a system memory 22, and a system bus 23. The system bus couples various system components, including the system memory, to processing unit 21 and may be any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of known bus architectures. The system memory includes read-only memory (ROM) 24 and random access memory (RAM) 25. A basic input/output system (BIOS) 26, containing the basic routines that are employed when transferring information between elements within Server 20 and during start up, is stored in ROM 24. Server 20 further includes a hard disk drive 27 for reading from and writing to a hard disk (not shown), a magnetic disk drive 28 for reading from or writing to a removable magnetic disk 29, and an optical disk drive 30 for reading from or writing to a removable optical disk 31, such as a CD-ROM or other optical media. Hard disk drive 27, magnetic disk drive 28, and optical disk drive 30 are connected to system bus 23 by a hard disk drive interface 32, a magnetic disk drive interface 33, and an optical disk drive interface 34, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer-readable machine instructions, data structures, program modules, and other data for Server 20. 
Although the exemplary environment described herein employs a hard disk, removable magnetic disk 29, and removable optical disk 31, it will be appreciated by those skilled in the art that other types of computer-readable media, which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, Bernoulli cartridges, random access memories (RAMs), read-only memories (ROMs), and the like, may also be used in the exemplary operating environment.
  • A number of program modules may be stored on the hard disk, magnetic disk 29, optical disk 31, ROM 24, or RAM 25, including an operating system 35, one or more application programs 36 (such as a browser program), other program modules 37, and program data 38. An operator may enter commands and information into Server 20 through input devices such as a keyboard 40 and a pointing device 42. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, digital camera, or the like. These and other input devices are often connected to processing unit 21 through an input/output (I/O) device interface 46 that is coupled to the system bus. Output devices, such as a printer (not shown), may also be connected to processing unit 21 through an I/O device interface 46 that is coupled to the system bus. Similarly, a monitor 47 or other type of display device is also connected to system bus 23 via an appropriate interface, such as a video adapter 48, and is usable to display Web pages and/or other information. In addition to the monitor, Servers may be coupled to other peripheral output devices (not shown), such as speakers (through a sound card or other audio interface—not shown).
  • Server 20 preferably operates in a networked environment using logical connections to one or more additional computing devices, such as to a cluster node 49 that is yet another server in a cluster of servers. Cluster node 49 is alternatively a database server, a mainframe computer, or some other network node capable of participating in cluster processing, and typically includes many or all of the elements described above in connection with Server 20, although only an external memory storage device 50 has been illustrated in FIG. 1. The logical connections depicted in FIG. 1 include a local area network (LAN) 51 and a wide area network (WAN) 52. Such networking environments are common in offices, enterprise-wide computer networks, and intranets. Preferably, LAN 51 is a back-end subnet connecting a plurality of resource nodes of the cluster in communication with each other. Preferably, WAN 52 is the Internet, which connects the cluster in communication with a plurality of client computers 55 a, 55 b, etc.
  • Those skilled in the art will recognize that LAN 51 and WAN 52 can be the same network with both resource nodes and client computers connected via only a single network. Client computers 55 a, 55 b, etc. each preferably include many of the elements described above in connection with Server 20. However, as indicated above, client computers 55 a, 55 b, etc. may be a combination of hand-held devices, pocket personal computing devices, digital cell phones, and other types of client computing devices.
  • Server 20 is connected to LAN 51 through a cluster network interface or adapter 53, and to WAN 52 through a client network interface or adapter 54. Client network interface 54 may be a router, modem, or other well-known device for establishing communications over WAN 52 (i.e., over the Internet). Those skilled in the art will recognize that cluster network interface 53 and client network interface 54 may be internal or external, and may be the same, or even a single interface device. Cluster network interface 53 and client network interface 54 are connected to system bus 23, or may be coupled to the bus via I/O device interface 46, e.g., through a serial or other communications port.
  • In a networked environment, program modules depicted relative to Server 20, or portions thereof, may be stored in the remote memory storage device. It will be appreciated that the network connections shown are exemplary and other techniques for establishing a communications link between the computers may be used, such as wireless communications.
  • Exemplary Implementation of the Present Invention
  • The following describes exemplary implementations of several preferred embodiments. FIG. 2 illustrates the architecture of one preferred embodiment of the invention wherein a cluster of Servers host online game services.
  • Cluster 100 generally comprises multiple nodes, such as node 102 and node 104. Each node is operatively connected in communication with clients via a wide area network, such as the Internet 106, and operatively connected to a local area network, such as back-end subnet 108. Each node can host multiple types of online games, such as Hearts, Checkers, Spades, Backgammon, and Reversi.
  • Each type of game will typically be implemented in multiple instances on the game service. For example, Hearts has multiple instances identified as Hearts game services 110 aa, 110 ab, etc. Similarly, Checkers has multiple instances implemented as Checkers game services 112 aa, 112 ab, etc. Each game service thus supports multiple instances of the game for that game service. For example, Hearts game service 110 aa might support two hundred games of Hearts, each match including four players. However, to simplify FIG. 2, each game service is illustrated as representing only one match. Also, each type of game may not have equivalent numbers of game service instances. Some nodes may not provide services for a particular type of game at all. The number of game service instances on a node for a particular type of game typically is indicative of the number of clients requesting that game. For illustrative purposes, node 104 has Hearts game services 110 ba and 110 bb, as well as Checkers game services 112 ba and 112 bb, reflecting that a sufficient number of clients requested Hearts and Checkers to warrant multiple instances of each game service.
  • To manage client communications with the game services, each node in the cluster implements its own proxy service, as illustrated by proxy services 114 a and 114 b, and the proxy services for the nodes are all identical in functionality. Each proxy service also accesses a dynamic link library (DLL) that comprises a load balancer as illustrated by dynamic link libraries 116 a and 116 b.
  • A single instance of the proxy service and load balancer could control communication and load balancing for the entire cluster if all client and game service communications were routed to the single proxy service and load balancer, respectively. Such an approach would provide a centralized control embodiment of the present invention. Preferably, however, a proxy service and a load balancer run on each node to provide fault-tolerant distributed control. Each proxy service manages TCP connections to Internet 106. Multiple TCP connections 118 aa, 118 ae, 118 ah, 118 ba, etc. are thus illustrated. These TCP connections enable communication between client devices (not shown) and the various instances of the game services.
  • The proxy service on each node maintains connections with some clients for a short time, although those clients may not currently be communicating with a game service. For example, TCP connection 118 ae represents a client connection in which the client dropped out of Hearts game service 110 ab, but wanted to be matched with a new set of players for another game. The remaining players may continue playing if the game service involved supports artificial intelligence, so that the game service provides a “computer player” as a replacement. After dropping out of Hearts game service 110 ab, the player at TCP connection 118 ae continues to communicate with proxy service 114 a to find out which node has a new Hearts game service that the player can join, and proxy service 114 a will direct the player to Hearts game service 110 bb for a new match. Similarly, the player on TCP connection 118 ah dropped out of Hearts game service 110 ab, but reconnected to Checkers game service 112 aa.
  • The game services maintain User Datagram Protocol (UDP) communication with load balancer 116. UDP connections 120 aa, 120 ba, etc. convey various status messages from each game service to the corresponding node's load balancer. Each load balancer 116 maintains UDP connections 122 a, 122 b, etc., using back-end subnet 108.
  • One game service instance of each type of game is designated as an “intake” for the cluster. The intake for a specific type of game serves as a central point for new client requests and groups, or matches, clients into a game. For example, Checkers game service 112 aa is designated as the Checkers intake 124 for the Checkers game type. Similarly, Hearts game service 110 bb is designated as the Hearts intake 126 for the Hearts game type. Any game service instance can be designated the intake for that game type regardless of which node within the cluster is executing the designated game service instance. However, only one game service instance can be designated as the intake for each type of game. A game service instance that is designated as the intake for a game type is responsible for accepting new client requests to participate in that type of game.
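The designation scheme above can be sketched as a small table mapping each game type to exactly one intake address. This is an illustrative Python sketch; the names, ports, and data layout are assumptions and do not appear in the specification.

```python
# Hypothetical per-node view of intake designations: one (node IP, TCP port)
# entry per game type, preserving the one-intake-per-game-type invariant.
intake_table = {}  # game_type -> (node_ip, tcp_port)

def designate_intake(game_type, node_ip, tcp_port):
    """Record the single game service instance acting as intake for a game
    type. Re-designating a game type simply replaces the previous entry."""
    intake_table[game_type] = (node_ip, tcp_port)

def intake_for(game_type):
    """Return the current intake address, or None if no intake is known."""
    return intake_table.get(game_type)

designate_intake("Checkers", "10.0.0.1", 28801)
designate_intake("Hearts", "10.0.0.2", 28802)
designate_intake("Checkers", "10.0.0.2", 28803)  # intake moved to another node
```

Because re-designation overwrites the prior entry, no two game service instances can simultaneously be recorded as the intake for the same game type on a given node.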
  • FIG. 3 illustrates the logic of a preferred embodiment that facilitates the connection of a client device to the intake of a type of game provided on a cluster like that described above. Although the present invention does not require the client device to have any special capabilities or to store communication addresses to games hosted by the cluster, one preferred embodiment incorporates additional capabilities and provides addresses accessed by the operating systems of the client devices to automate connecting to and communicating with the cluster. For example, the Microsoft WINDOWS™ Millennium Edition Operating System includes a list of DNS names and associated IP addresses of cluster nodes in Microsoft Corporation's MSN GAME ZONE™, which are configured to run the game types discussed above. Thus, when a user selects a type of game on the client device that the user wants to play online, the operating system picks the corresponding DNS name and an associated IP address to initiate communication with the cluster so that the user can play the selected game.
  • As illustrated by a step 150, the client device attempts to connect to the selected game IP address in the cluster. At a decision step 152, the client device determines whether the connection attempt has failed. If so, then the client device determines at a decision step 154 whether additional IP addresses that correspond to other nodes in the cluster for the selected game type are stored on the client device. If an additional IP address is available for the selected game type, the client device returns to step 150 to attempt to connect to the new IP address. If no additional IP addresses are available for the selected game type, then the client device retrieves a generic IP address for the cluster from its list of DNS names. A generic IP address is registered to the general DNS name of the cluster, which has all nodes in the cluster registered to it. Thus, even if only one node in the cluster remains functional, the client will eventually find it. The client then attempts to connect to the generic IP address at a step 156. At a decision step 158, the client device determines whether this connection has failed. If so, the client device determines at a decision step 160 whether any additional generic IP addresses to the cluster are stored on the client device. If so, the client device again attempts to connect to the cluster at the new generic IP address via step 156. If no additional generic IP addresses are available, then the connection attempts are ended, and a notice of the failure is provided to the user.
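The fallback sequence of steps 150 through 160 amounts to walking the game-specific addresses first and the generic cluster addresses second, stopping at the first address that accepts a connection. The following is a minimal sketch; the function name, the callback, and the example addresses are assumptions for illustration only.

```python
def first_reachable(game_addrs, generic_addrs, try_connect):
    """Attempt the game-specific IP addresses (step 150), then fall back to
    the generic cluster addresses (step 156). Return the first address for
    which try_connect succeeds, or None if every attempt fails."""
    for addr in list(game_addrs) + list(generic_addrs):
        if try_connect(addr):
            return addr
    return None  # all connection attempts ended; notify the user of failure

# Example: both game-specific nodes are down, but a generic address works.
reachable = {"203.0.113.7"}
chosen = first_reachable(["203.0.113.5", "203.0.113.6"], ["203.0.113.7"],
                         lambda addr: addr in reachable)
```

A real client would supply a `try_connect` that opens a TCP connection with a timeout rather than the set-membership stand-in used here.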
  • Once a communication connection is established between the client device and the cluster via the Internet, the client device makes a request to participate in the selected game type at a step 162. If the IP address of the node in the cluster to which the client device has connected is not the node hosting the game instance currently designated as the intake for the selected game type (sometimes referred to as the intake node), the client device will receive the correct IP address for the node in the cluster that is hosting the game instance designated as the intake for the selected game. At a decision step 164, the client device determines whether it has been given a different IP address for redirecting the client device to the correct node currently designated as the intake for the requested game. If the client device receives a new IP address, the client device redirects connection with the client to the new intake address at a step 166. The client device then monitors the communication to detect any failure at a decision step 168. If the communication connection fails at this point, then the client device again attempts to establish communication with one of the IP addresses associated with the DNS name for the selected game at step 150. If, however, the client device successfully connects to the new intake IP address, the client device again requests game service at step 162.
  • As long as the client device is not redirected again because the intake moved before the client device's game request was received, the client device has connected to the current intake for the selected type of game and will not be redirected at decision step 164. The client device then waits at a step 170 for a match to be made with other client devices requesting the same selected game. If, while waiting for a match, communication is disconnected between the client device and the intake, the client device detects the disconnection at a decision step 172. If communication is disconnected, the client device receives the most recent intake address for the selected game and a disconnection notice from the cluster at a step 174. If no interruption occurs in the communication, and sufficient other client devices are connected to the intake for the selected game, a match is formed by that intake and the selected game is played at a step 176.
  • While the game is being played, at a decision step 178, the client device monitors the communications to detect any interruption in the game service or in the communication with the cluster. If communication is disconnected, the service fails, or the game finishes, then the client device receives the IP address for the current intake and a disconnect notice from the cluster at a step 180. At a step 182, the client device then displays a notice to the user indicating that the game has terminated and requesting whether the user wishes to play the game again. If so, the client device attempts to connect to the new IP address at step 166. If the user does not wish to play another game, the process ends.
  • The user may also terminate the game while connected to the cluster. The client device detects an indication that the user wishes to exit the game at a decision step 184. If the user has elected to exit, the process ends. However, if the user does not indicate any desire to end the game, the game continues for as long as the same group of matched client players continues to play the game at step 186 and an interruption in communications does not occur. If the group of matched client players disbands, but the user wishes to continue playing the same game type, then the client device is directed to the current intake for the selected type of game in order to be rematched into a new game. The client device requests to be disconnected from the current game service at a step 190, which enables the client device to receive the IP address for the current intake of the selected game type at step 174. Reconnection to the current intake then proceeds as described above.
  • On the cluster side of this process, FIG. 4 illustrates the logic implemented by a proxy service in a preferred embodiment to manage client connections with a node in the cluster. At a step 200, the proxy service accepts an initial TCP connection from a client. The proxy service then waits for a game request from the client at a step 202. At a decision step 204, the proxy service determines whether the client connection has been dropped. If so, the service to that client comes to an end.
  • However, if the proxy service successfully receives a game request without the connection being dropped, then at a step 206, the proxy service requests an address of the intake for the requested game type. The proxy service makes this request to the load balancer component of the proxy service on the local node executing the proxy service. When the proxy service receives an intake address from the load balancer, at a decision step 208, the proxy service determines whether the intake address is on the local node (i.e., the node on which the proxy service is executing) or on a remote node. If the intake address is not on the local node, then the proxy service sends a message to the client at a step 210, including the IP address of the node with the intake for the requested game type. The proxy service then disconnects from the client at a step 212, ending its service to this client.
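The local-versus-remote decision of steps 208 through 212 can be sketched as follows. This is an illustrative fragment, not the patented implementation; the function name and return values are assumptions.

```python
def handle_game_request(local_node_ip, intake_addr):
    """Decide how the proxy service responds to a game request (steps
    208-212): if the intake for the requested game type is on this node,
    connect the client locally; otherwise, send the client the intake
    address and disconnect."""
    intake_ip, intake_port = intake_addr
    if intake_ip == local_node_ip:
        return ("connect_local", intake_port)   # step 214: local TCP connection
    return ("redirect", intake_addr)            # steps 210-212: redirect, drop

# Example: the intake lives on a remote node, so the client is redirected.
action = handle_game_request("10.0.0.1", ("10.0.0.2", 28801))
```

Redirecting rather than proxying cross-node traffic keeps each node's proxy service responsible only for clients whose game service is local, which is what allows the cluster to scale without a central relay.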
  • Once the client connects to the node with the intake for the requested game type, the local proxy service of that node establishes a TCP connection to the intake at a step 214. The intake matches the client with other waiting clients for an instance of the requested game. This function is considered a stateful connection, because the same game service continues to service this group of clients who are playing in a game instance, until the game is completed, the game is prematurely terminated, or the connection is interrupted. Thus, at a step 216, the proxy service passes messages between the same game service and the client throughout the game session for that client. While the game session continues, the proxy service determines whether the client has dropped out of the session at a decision step 218. If so, the service to that client is terminated.
  • While the client remains in the game session, the proxy service also determines whether the game session has been prematurely dropped at a decision step 220. A game session may be dropped if another client drops out of the game and the game instance does not permit an artificial intelligence game player (i.e., the computer) to take over. The game session may also be dropped if the game session fails, the node is removed from the cluster, or if other interruptions occur.
  • If a game interruption occurs, the proxy service again requests the load balancer for the current intake address of the selected game type at a step 222. When the proxy service receives the intake address from the load balancer, the proxy service sends a message to the client at a step 224, notifying the client that the game is over, and providing the client with the current intake address. At a decision step 226, the proxy service determines whether the current intake address corresponds to a local game service (i.e., on the same node as the proxy service). If the current intake for the selected game is on the local node, the proxy service returns to step 202 to wait for the client to request another game. However, if the current intake for the selected game is on a remote node, the proxy service disconnects the client at a step 212, requiring the client to reconnect to the node designated as the current intake for the game selected by the client.
  • While the game is in session, the proxy service monitors to determine if the client has requested a new instance of the same type of game at a decision step 228. So long as no such request is made, the proxy service continues to relay messages between the client and the game service, as indicated at a step 216. If, however, the client requests a new game instance of the same game type, the proxy service returns to step 206 to obtain the address for the current intake for the selected game type. The process then continues as before.
  • FIG. 5A illustrates the load-balancing logic that occurs on each node in the cluster. Load balancing is initialized at a step 250. At a decision step 252, the load balancer detects any UDP messages from the game services on that node, or from other load balancers on other nodes in the cluster. While there are no incoming UDP messages, the load balancer passes control at a continuation step A to perform the steps illustrated in FIG. 5B (discussed later). When the load balancer detects a new UDP message, the load balancer reads the UDP message at a step 254.
  • The load balancer then determines which type of UDP message it has received. In a preferred embodiment, there are four types of UDP messages. One type of UDP message is a “service” message from each game service on the node. Approximately every two seconds, each game service transmits a service message to the load balancer to inform the load balancer of the status of the game service. Preferably, each service message includes a unique name or ID for the game service that is transmitting the service message, the IP address, the TCP port of the game service, the current population of clients being served by the game service, and an indication of whether the game service will accept additional clients. Each game service may accommodate clients for a single game or clients for multiple games. It would not be unusual to handle more than 1,000 clients per game service. The load balancer maintains a table that includes the state information described above for each game service. Thus, if the load balancer determines at a decision step 256 that the incoming UDP message is a service message, the load balancer updates its table of service states at a step 258. Updating the table includes adding a new game service to the table if a service message is received from a newly started game service.
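The table update of step 258 can be sketched as below. The field names and message layout are illustrative assumptions; the specification names the fields but not their encoding.

```python
import time

service_table = {}  # service_id -> most recently reported state

def on_service_message(msg, now=None):
    """Record the state carried by a 'service' UDP message (sent roughly
    every two seconds by each game service on the node). An unknown
    service_id is simply added, covering newly started game services."""
    service_table[msg["service_id"]] = {
        "ip": msg["ip"],
        "port": msg["port"],
        "population": msg["population"],
        "accepting": msg["accepting"],
        "last_seen": time.time() if now is None else now,
    }

on_service_message({"service_id": "hearts-aa", "ip": "10.0.0.1",
                    "port": 28801, "population": 812, "accepting": True},
                   now=100.0)
```

Recording a `last_seen` timestamp with each entry is what later allows the load balancer to expire game services that stop reporting.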
  • Another type of UDP message is a “data” message that is received from the other nodes in the cluster. Each data message includes information such as a unique node ID, the node IP address, the total load on a node, a list of all game services running on that node, and an indication of whether the node is accepting new client connections. Preferably, each node sends a data message to the other nodes in the cluster approximately every two seconds in this embodiment. If a data message is not received from a remote node in the cluster within approximately eight seconds, in this example, the remote node is assumed to have failed. In a manner similar to the table of service states, each load balancer maintains a table of node states, including the information described above, for all nodes in the cluster. Thus, if the load balancer for a node determines at a decision step 260 that the UDP message is a data message, the load balancer for the node updates its table of node states at a step 262. Updating includes adding previously unknown nodes that have recently been activated on the cluster.
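The eight-second failure assumption above implies a periodic pruning pass over the table of node states, which might look like the following sketch (the timeout constant matches the example in the text; everything else is illustrative).

```python
NODE_TIMEOUT = 8.0  # seconds without a data message before a remote node
                    # is presumed to have failed, per the example above

def prune_stale_nodes(node_table, now):
    """Remove entries for nodes whose most recent data message is older
    than NODE_TIMEOUT. Mutates and returns node_table."""
    stale = [node_id for node_id, state in node_table.items()
             if now - state["last_seen"] > NODE_TIMEOUT]
    for node_id in stale:
        del node_table[node_id]
    return node_table

# Example: node 2 last reported 9 seconds ago and is presumed failed.
nodes = {1: {"last_seen": 100.0}, 2: {"last_seen": 93.0}}
prune_stale_nodes(nodes, now=102.0)
```

Since data messages arrive roughly every two seconds, an eight-second timeout tolerates several consecutive lost datagrams before declaring a node dead.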
  • Another type of UDP message is an “intake” message, which is received from a remote node in the cluster. An intake message indicates that the remote node in the cluster has decided that the intake for a particular game type should be assigned to a node other than the remote node. The intake message includes identification (e.g., the node IP address and game service TCP port) of the node and of the particular game service that the remote node has determined to designate as the intake for the particular game type. Thus, at a decision step 264, the load balancer on a node determines whether a UDP message is an intake message, and if so, further determines at a decision step 266 whether the node has been selected to take over the intake for a given game type.
  • The fourth kind of UDP message in this exemplary preferred embodiment is a “heartbeat” or status message from a node that currently is designated as an intake for any one of the game types. If none of the other three types of UDP messages are detected, the load balancer assumes that the UDP message is a heartbeat or status message at a step 268. A heartbeat or status message includes information such as the game ID, node IP address, and game service TCP port. Receipt of successive heartbeat or status messages assures all other nodes that the intake for a game type is still functioning. Preferably, a heartbeat or status message is received approximately four times per second for each game type in this embodiment. If a heartbeat message is not received within approximately one second, then one of the prospective receiving nodes assumes that the intake has failed for that game type and assumes control of the intake for that game type. Thus, when a node receives a heartbeat message, the load balancer for that node must determine at a decision step 272 whether the node already has control of the intake for the game type reflected in the heartbeat message. If both the heartbeat message received from another node and the load balancer of the local node indicate that both the other node and the local node control the intake for a particular game type, then the conflict is resolved at a decision step 274. Resolution of the conflict occurs by providing that the node with the lowest ID number will control the intake for a game type. If there was no conflict, or if the local node relinquishes control of the intake designation after resolution of a conflict, then the load balancer updates a table of intake locations at a step 270 a.
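The heartbeat timeout and the lowest-ID conflict rule described above can be sketched in a few lines. The timeout value follows the approximate figure in the text; the function names are illustrative assumptions.

```python
HEARTBEAT_TIMEOUT = 1.0  # heartbeats arrive ~4x per second; more than one
                         # second of silence implies the intake has failed

def intake_alive(last_heartbeat, now):
    """True while the intake's heartbeat messages remain within the
    allocated time."""
    return (now - last_heartbeat) <= HEARTBEAT_TIMEOUT

def resolve_intake_conflict(local_node_id, remote_node_id):
    """When two nodes each believe they control the intake for a game
    type, the node with the lowest ID number retains control."""
    return min(local_node_id, remote_node_id)
```

Resolving by lowest ID is deterministic on every node, so both parties to the conflict reach the same answer without any further message exchange.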
  • If a local node determines at decision step 266 that a remote node sent an intake message designating a particular game service on the local node as the new intake for a game type, then the local node load balancer determines at a decision step 276 whether the designated game service is still available. If so, the designated game service assumes control as the intake and sends a heartbeat message at a step 278 a. If the designated game service is unavailable, the load balancer selects a new game service to be the new intake for that game type at a step 280 a. If the new intake game service resides on the local node, as determined at a decision step 282 a, then the new intake game service becomes the intake and sends a heartbeat message at step 278 a. If, however, the game service newly selected as the intake resides on a remote node in the cluster, then the load balancer broadcasts an intake message to the other nodes at a step 284 a. The local node then updates its table of intake locations at a step 270 a. Finally, the local node waits for another UDP message at decision step 252.
  • Continuation point B indicates the transfer of control from the logical steps illustrated in FIG. 5B to a step 286, where the load balancing process waits for a loop time-out before going forward to loop again through the steps of FIGS. 5A and 5B. This pause exists to throttle the load balancer's use of the processor so as not to starve the other services running on the machine. When this time-out finishes, the process proceeds to a decision step 288, where the load balancer determines whether the load balancer service is to shut down. If so, the load-balancing process ends. If not, the load balancer continues to monitor the subnet for incoming UDP messages at decision step 252.
  • FIG. 5B illustrates the steps performed while no UDP messages are incoming. Continuation point A indicates that control is passed from the corresponding point in the logic illustrated in FIG. 5A to a decision step 300, where the load balancer determines whether it is time to send an updated data message to the rest of the nodes in the cluster. If so, the load balancer first updates its list of local game services at a step 302 by removing any game services from its list that failed to send a service message to the load balancer within the allocated time. For example, if a service message is not received from a particular game service within six seconds, the node assumes the game service has failed, and the failed game service is removed from the load balancer's service list.
  • The load balancer calculates the load on the node at a step 304. Preferably, this load calculation is accomplished via a script function that can be dynamically changed while the node is running, to adjust the behavior of the load balancer. In computing a “load value” for the node, the load calculation script uses information such as the node's available memory size, CPU utilization, the total number of clients being serviced on the node, a preset target population of clients, and a preset maximum population of clients. Use of preset but adjustable parameters enables network administrators to tune each node. Preferably, the load value is also normalized to a range from 0-1000. In this manner, all nodes in the cluster can be compared uniformly even if each node in the cluster has different processing capabilities. The load balancer then assembles a data message and broadcasts the data message to other nodes in the cluster at a step 306. At a step 308, the load balancer removes any node information from its table of node information related to any node that failed to send a data message within the allocated time.
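A load calculation script of the kind described above might look like the following. The weights are purely illustrative assumptions; the specification deliberately leaves the formula to a configurable script, and a production script would likely also consider the preset target population. Only the 0-1000 normalization is taken from the text.

```python
def load_value(cpu_util, mem_free_frac, clients, max_pop):
    """Compute an illustrative normalized load value for a node.

    cpu_util      -- CPU utilization as a fraction in [0, 1]
    mem_free_frac -- fraction of memory still available, in [0, 1]
    clients       -- total clients currently serviced on the node
    max_pop       -- preset maximum client population for the node
    """
    pop_term = min(clients / max_pop, 1.0)
    # Hypothetical weighting of CPU, memory pressure, and population.
    raw = 0.4 * cpu_util + 0.3 * (1.0 - mem_free_frac) + 0.3 * pop_term
    # Clamp to the 0-1000 range used for uniform cross-node comparison.
    return max(0, min(1000, int(round(raw * 1000))))
```

Normalizing every node's result to the same 0-1000 scale is what lets a cluster of heterogeneous machines be compared directly, as the paragraph above notes.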
  • The load balancer performs its load-distribution algorithm to determine whether an intake should be moved from the node to another node, and if so, selects the game service on a remote node that should be designated as the new intake for a game type. At a decision step 310, the load balancer first determines whether to move an intake to another node. To make this determination, the node's load balancer performs another script function. A variety of criteria can be employed in the function used to make the decision. For example, the function may simply be a timer, causing the intake to move at predetermined intervals. Preferably, however, the function uses parameters such as the current client population of the intake game service, the length of time the intake has been situated at that game service, whether the local node has any special affinity for the game type, and the previously computed load value for the node. The function can also use information about the condition of other game instances and other nodes to determine whether the game instance that is currently designated as the intake should give up that designation in favor of another game instance. For example, the function could consider the calculated load on other nodes, or the current client population of other instances of the same game type on other nodes, or on the same node as the current intake. Preferably, however, the function considers only the condition of the current intake, so as not to move the intake around so often that the frequent change of game instances designated as the intake slows the formation and processing of games.
  • Nevertheless, the script can be modified at any time to provide flexibility to a cluster administrator. Both the logic and threshold values that contribute to the decision can be dynamically configured independently on each node. For example, a cluster administrator could configure the maximum client population allowed per game service, the maximum time allowed for a game service to be designated as the intake, the maximum load value allowed for a node, and the minimum time allowed between intake moves to control the redesignation of the intake and thus, to balance the processing load as desired.
  • If the load balancer determines that it is inappropriate to move an intake at the present time, the load balancer moves on to a step 312 to check for time-out of heartbeat messages and service messages. If, however, the load balancer determines that the game instance designated as the intake should now be changed, the load balancer performs a third script function at a step 280 b to calculate a “rating value” for each instance of the game service of the applicable game type on each node in the cluster. This function also uses parameters such as the client population of each game service for the applicable game type and the load value of each node. In addition, the function may also include an affinity parameter that gives higher weight to a node configured so as to be given preference for a particular game type. The game service with the highest rating value in the cluster is then selected to be the new intake for the game type.
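The rating computation of step 280 b can be sketched as below. The weights and the affinity bonus are illustrative assumptions; the specification names the inputs (client population, node load value, affinity) but leaves their combination to the configurable script.

```python
def select_new_intake(candidates):
    """Pick the game service with the highest rating value among all
    instances of the applicable game type in the cluster. Each candidate
    is a dict with 'population', 'node_load', and optional 'affinity'."""
    def rating(c):
        return (c["population"] * 1.0        # favor well-populated services
                - c["node_load"] * 0.5       # penalize heavily loaded nodes
                + c.get("affinity", 0) * 100)  # bonus for preferred nodes
    return max(candidates, key=rating)

# Example: the affinity-weighted service on the lightly loaded node wins.
candidates = [
    {"id": "A", "population": 300, "node_load": 400},
    {"id": "B", "population": 200, "node_load": 100, "affinity": 1},
]
winner = select_new_intake(candidates)
```

Because every node computes ratings from the same broadcast data and node tables, whichever node runs the selection will nominate the same game service.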
  • If the newly selected intake for a game service resides on the same node as the previous intake for that game service, as determined at a decision step 282 b, then the load balancer need only update its own table of intake locations at a step 270 b. However, if the newly selected intake for the game service is at a remote node, the load balancer broadcasts an intake message to all of the nodes in the cluster at a step 284 b. After the intake message is sent, the load balancer then updates its table of intake locations at step 270 b.
  • The load balancer then checks for time-outs of any heartbeat (status) or service messages at a step 312. This check determines whether any intake game services have failed to issue the expected heartbeat or service message within the allocated time. Determination of whether an intake game service has expired is made at a decision step 314. If the load balancer has not received an expected heartbeat or service message within the allocated time, then at a decision step 282 c, the load balancer determines whether the expected intake game service resides on the local node. If the expected intake for a game service does reside on the local node, the load balancer assumes that the expected intake game service has failed and selects a new intake game service at a step 280 c, as described above. At a decision step 282 d, the load balancer determines whether the newly selected intake game service also resides on the local node. If so, the load balancer updates its table of intake locations at a step 270 c. If, however, the newly selected intake game service resides on a remote node, then the load balancer broadcasts an intake message to all nodes in the cluster at a step 284 c. The load balancer then updates its list of intake locations at step 270 c.
  • If decision step 282 c determines that the expected intake for a game service does not reside on the local node, the load balancer assumes that the other node has failed and assumes control of the intake for that game type at a step 316. As part of this step, the load balancer also selects a game service on the local node to be the intake game service based on the ratings for the local game services. The load balancer then updates its table of intake locations at step 270 c. Although the selection and update of a newly selected intake game service is the same as described earlier with regard to the load-distribution algorithm, the selection and update at steps 280 c and 316 are a result of a failure rather than the result of a decision to better distribute the load. Thus, the selection and update at these points are a function of a fault tolerance that is built into the load balancer. This fault tolerance is also reflected in FIG. 5A, beginning at decision step 276.
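The heartbeat time-out handling at steps 312-316 can be sketched as a periodic check over the table of intake locations. The data structures and names below are assumptions made for illustration, and the failover is simplified to always designate a local service as the new intake (as at step 316); the patent's full behavior additionally re-rates all candidate services when the failed intake was local.

```python
# Hedged sketch of heartbeat time-out handling. Any intake whose heartbeat
# is overdue is presumed failed, and a local game service takes over as the
# intake for that game type. Structure names are illustrative assumptions.
def check_heartbeats(local_node, intake_table, last_heartbeat, now,
                     timeout, select_local_intake):
    """intake_table: game_type -> (node_id, service_id)
    last_heartbeat: game_type -> time of last heartbeat received
    select_local_intake: picks a local service for a game type (e.g. by rating)
    """
    for game_type, (node_id, service_id) in list(intake_table.items()):
        if now - last_heartbeat.get(game_type, now) <= timeout:
            continue  # heartbeat arrived within the allocated time
        # Expected heartbeat is overdue: assume the intake (or its node)
        # failed, and take over the intake for this game type locally.
        intake_table[game_type] = (local_node, select_local_intake(game_type))
```

Because every node runs this same check against the same broadcast heartbeats, a failed intake node is detected cluster-wide without any central monitor, which is the fault tolerance described above.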
  • Once a new intake game service is selected, or if all expected heartbeat and service messages are received in the allocated time, at a decision step 318, the load balancer determines whether any intake game services reside on the local node. If so, the load balancer broadcasts a heartbeat at a step 278 b and proceeds to continuation point B. If no intake game services reside on the local node, the load balancer proceeds directly to continuation point B.
  • Although the present invention has been described in connection with the preferred form of practicing it, those of ordinary skill in the art will understand that many modifications can be made thereto within the scope of the claims that follow. Accordingly, it is not intended that the scope of the invention in any way be limited by the above description, but instead be determined entirely by reference to the claims that follow.

Claims (21)

1-36. (canceled)
37. A method for responding to a plurality of different types of incoming messages for load-balancing at each node included in a cluster of nodes, wherein the plurality of different types of incoming messages are received from a plurality of sources in the cluster, including from other nodes, comprising the steps of:
(a) at each node, determining a type of the incoming message that has been received; and
(b) determining an appropriate response to be performed by a load-balancer running on the node, for each incoming message, based on one of a frequency, a content, and the type of the incoming message.
38. The method of claim 37, further comprising the step of employing information comprising each of the incoming messages to balance a load on a local node relative to loads on other nodes of the cluster.
39. The method of claim 37, wherein the plurality of different types of the incoming messages comprises a service message that is received from a game service on a local node, the service message including:
(a) an identification of the game service and its port on the local node;
(b) a number of clients being served by the game service on the local node; and
(c) an indication of whether additional clients will be served by the game service of the local node.
40. The method of claim 39, wherein the step of determining the appropriate response for the service message includes the step of updating information maintained by the local node for all game services on the local node to reflect information included in the service message.
41. The method of claim 37, wherein the plurality of different types of the incoming message comprises a data message that is received from a different node, the data message including:
(a) an identification of the different node;
(b) the total load on the different node;
(c) a list of game services served by the different node; and
(d) an indication of whether additional clients will be served by the different node.
42. The method of claim 41, wherein the step of determining the appropriate response for the data message includes the steps of:
(a) updating information maintained by a local node for all different node states on the cluster to reflect information included in the data message; and
(b) assuming that the different node has failed if the data message has not been received within a predefined interval by the local node.
43. The method of claim 37, wherein the plurality of different types of the incoming message comprises an intake message that is received from a different node, the intake message including:
(a) an identification of the different node; and
(b) an identification of an intake for a game type that the different node has determined should be assigned to a node in the cluster other than the different node.
44. The method of claim 43, wherein the step of determining the appropriate response for the intake message further includes the steps of:
(a) updating information maintained by the local node for all intake locations on the cluster to reflect information included in the intake message;
(b) determining whether the local node has been assigned by the different node to take over the intake for the game type, and if so;
(c) determining whether the local node can provide the game service for the game type, and if so, enabling the game service on the local node to assume control as the intake for the game type and sending a status message indicative thereof to other nodes of the cluster; else
(d) selecting a different game service on yet another node in the cluster to be the intake for the game type.
45. The method of claim 44, wherein the step of selecting the different game service includes the step of sending an intake message to other nodes of the cluster indicating that the game type is being assigned to the different game service on the node that was selected.
46. The method of claim 37, wherein the plurality of different types of the incoming message comprises a status message that is received from an intake node for a game type, the status message including:
(a) an identification of the intake node; and
(b) an identification of the game type.
47. The method of claim 46, wherein the step of determining the appropriate response for the status message includes the steps of:
(a) updating information maintained by a local node for all intake locations on the cluster to reflect information included in the status message;
(b) assuming that a reassignment of the intake has failed for the game type and assuming control of the intake for the game type if a status message for the game type has not been received within a predefined interval; else
(c) if the status message has been received within the predefined interval, determining that reassignment of the intake for the game type was successful.
48. The method of claim 47, further comprising the steps of:
(a) determining if both the local node and another node are asserting control of the intake for the game type; and if so,
(b) applying an arbitrary rule to resolve which of the local node and the other node shall be a controller for the intake for the game type.
49. A system for responding to a plurality of different types of incoming messages for load-balancing at each node included in a cluster of nodes, comprising:
(a) a memory in which a plurality of machine instructions are stored;
(b) at least one processor for implementing the cluster of nodes, each said at least one processor being coupled to the memory, said at least one processor executing the plurality of machine instructions, causing a plurality of functions to be implemented, said plurality of functions including:
(i) enabling the plurality of different types of incoming messages to be received from a plurality of sources in the cluster, including from other nodes;
(ii) determining a type of the incoming message that has been received at each node; and
(iii) determining an appropriate response to be performed by a load-balancer running on the node, for each incoming message, based on one of a frequency, a content, and the type of the incoming message.
50. The system of claim 49, wherein execution of the plurality of machine instructions further causes information comprising each of the incoming messages to be employed to balance a load on a local node relative to loads on other nodes of the cluster.
51. The system of claim 49, wherein the plurality of different types of the incoming messages comprises:
(a) a service message that is received from a game service on a local node, said service message including an identification of the game service and its port on the local node, a number of clients being served by the game service on the local node, and an indication of whether additional clients will be served by the game service on the local node;
(b) a data message that is received from a different node, said data message including an identification of the different node, the total load on the different node, a list of game services served by the different node, and an indication of whether additional clients will be served by the different node;
(c) an intake message that is received from a different node, said intake message including an identification of the different node and an identification of an intake for a game type that the different node has determined should be assigned to a node in the cluster other than the different node; and
(d) a status message that is received from an intake node for a game type, said status message including an identification of the intake node and an identification of the game type.
52. The system of claim 51, wherein execution of the machine instructions further causes information maintained by the local node for all game services on the local node to be updated to reflect information included in the service message.
53. The system of claim 51, wherein execution of the machine instructions further causes information maintained by the local node for all different node states on the cluster to be updated to reflect information included in the data message, wherein the machine instructions cause the local node to determine that the different node has failed if the data message has not been received from the different node by the local node within a predefined interval.
54. The system of claim 51, wherein execution of the machine instructions further causes:
(a) information maintained by the local node for all intake locations on the cluster to be updated to reflect information included in the intake message;
(b) a determination of whether the local node has been assigned by the different node to take over the intake for the game type; and if so,
(c) a determination of whether the local node can provide the game service for the game type; and if so,
(d) enabling the game service on the local node so that it assumes control as the intake for the game type and sends the status message indicative thereof to other nodes of the cluster; otherwise,
(e) selecting a different game service on yet another node in the cluster to be the intake for the game type, and sending the intake message to other nodes of the cluster indicating that the game type is being assigned to the different game service on the node that was selected.
55. The system of claim 51, wherein execution of the machine instructions further causes:
(a) information maintained by a local node for all intake locations on the cluster to be updated to reflect information included in the status message;
(b) a determination that a reassignment of the intake has failed for the game type resulting in the local node assuming control of the intake for the game type, if the status message for the game type has not been received within a predefined interval; and otherwise,
(c) a determination that reassignment of the intake for the game type was successful if the status message has been received within the predefined interval.
56. The system of claim 55, wherein execution of the machine instructions further causes an arbitrary rule to be applied, to resolve which of the local node and another node shall be a controller for the intake for the game type, if it is determined that both the local node and the other node are both asserting control of the intake for the game type.
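The four incoming-message types recited in claims 39, 41, 43, and 46 can be sketched as simple typed records, with a type-based dispatcher in the spirit of the method of claim 37. All class and field names below are illustrative assumptions, not language from the claims.

```python
from dataclasses import dataclass

@dataclass
class ServiceMessage:   # from a game service on the local node (claim 39)
    service_id: str
    port: int
    clients: int
    accepting_more: bool

@dataclass
class DataMessage:      # periodic state report from a different node (claim 41)
    node_id: str
    total_load: float
    services: list
    accepting_more: bool

@dataclass
class IntakeMessage:    # intake reassignment notice from a different node (claim 43)
    node_id: str
    game_type: str
    new_intake_node: str

@dataclass
class StatusMessage:    # heartbeat from an intake node for a game type (claim 46)
    node_id: str
    game_type: str

def respond(msg, handlers):
    # Per claim 37: determine the type of the incoming message, then perform
    # the appropriate response registered for that type.
    return handlers[type(msg)](msg)
```

A node's load balancer would register one handler per message type (update the local service table, update node states, reassign the intake, or refresh the intake-location table, respectively), so the response is selected purely by message type as claim 37 recites.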
US11/101,777 2001-02-06 2005-04-07 Distributed load balancing for single entry-point systems Active 2022-05-22 US7395335B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US09/778,223 US7155515B1 (en) 2001-02-06 2001-02-06 Distributed load balancing for single entry-point systems
US11/101,777 US7395335B2 (en) 2001-02-06 2005-04-07 Distributed load balancing for single entry-point systems

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/778,223 Continuation US7155515B1 (en) 2001-02-06 2001-02-06 Distributed load balancing for single entry-point systems

Publications (2)

Publication Number Publication Date
US20050198335A1 true US20050198335A1 (en) 2005-09-08
US7395335B2 US7395335B2 (en) 2008-07-01

Cited By (67)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040103166A1 (en) * 2002-11-27 2004-05-27 International Business Machines Corporation Semi-hierarchical system and method for administration of clusters of computer resources
US20060005231A1 (en) * 2002-02-08 2006-01-05 Nir Zuk Intelligent integrated network security device for high-availability applications
US20060005063A1 (en) * 2004-05-21 2006-01-05 Bea Systems, Inc. Error handling for a service oriented architecture
US7181524B1 (en) * 2003-06-13 2007-02-20 Veritas Operating Corporation Method and apparatus for balancing a load among a plurality of servers in a computer system
US20070185997A1 (en) * 2006-02-09 2007-08-09 International Business Machines Corporation Selecting servers based on load-balancing metric instances
US20070282880A1 (en) * 2006-05-31 2007-12-06 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Partial role or task allocation responsive to data-transformative attributes
US20080016198A1 (en) * 2006-06-12 2008-01-17 Enigmatec Corporation Self-managed distributed mediation networks
US20080043617A1 (en) * 2006-08-21 2008-02-21 Citrix Systems, Inc. Systems and methods for weighted monitoring of network services
US20080049786A1 (en) * 2006-08-22 2008-02-28 Maruthi Ram Systems and Methods for Providing Dynamic Spillover of Virtual Servers Based on Bandwidth
US20080049616A1 (en) * 2006-08-22 2008-02-28 Citrix Systems, Inc. Systems and methods for providing dynamic connection spillover among virtual servers
US20080059560A1 (en) * 2006-08-29 2008-03-06 Samsung Electronics Co., Ltd Service distribution apparatus and method
US20080071891A1 (en) * 2006-09-19 2008-03-20 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Signaling partial service configuration changes in appnets
US20080071888A1 (en) * 2006-09-19 2008-03-20 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Configuring software agent security remotely
US20080072032A1 (en) * 2006-09-19 2008-03-20 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Configuring software agent security remotely
US20080072241A1 (en) * 2006-09-19 2008-03-20 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Evaluation systems and methods for coordinating software agents
US20080072278A1 (en) * 2006-09-19 2008-03-20 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Evaluation systems and methods for coordinating software agents
US20080071871A1 (en) * 2006-09-19 2008-03-20 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Transmitting aggregated information arising from appnet information
US20080127293A1 (en) * 2006-09-19 2008-05-29 Searete LLC, a liability corporation of the State of Delaware Evaluation systems and methods for coordinating software agents
US20090034417A1 (en) * 2007-08-03 2009-02-05 Ravi Kondamuru Systems and Methods for Efficiently Load Balancing Based on Least Connections
US20090106349A1 (en) * 2007-10-19 2009-04-23 James Harris Systems and methods for managing cookies via http content layer
US7653008B2 (en) 2004-05-21 2010-01-26 Bea Systems, Inc. Dynamically configurable service oriented architecture
US7760729B2 (en) 2003-05-28 2010-07-20 Citrix Systems, Inc. Policy based network address translation
US20100299437A1 (en) * 2009-05-22 2010-11-25 Comcast Interactive Media, Llc Web Service System and Method
US20110040892A1 (en) * 2009-08-11 2011-02-17 Fujitsu Limited Load balancing apparatus and load balancing method
US20110060809A1 (en) * 2006-09-19 2011-03-10 Searete Llc Transmitting aggregated information arising from appnet information
US20110255125A1 (en) * 2010-04-15 2011-10-20 Xerox Corporation System and method for burstiness-aware scheduling and capacity assessment on a network of electronic devices
US8090877B2 (en) 2008-01-26 2012-01-03 Citrix Systems, Inc. Systems and methods for fine grain policy driven cookie proxying
US8281036B2 (en) 2006-09-19 2012-10-02 The Invention Science Fund I, Llc Using network access port linkages for data structure update decisions
US20130166762A1 (en) * 2011-12-23 2013-06-27 A10 Networks, Inc. Methods to Manage Services over a Service Gateway
US8601104B2 (en) 2006-09-19 2013-12-03 The Invention Science Fund I, Llc Using network access port linkages for data structure update decisions
US8601530B2 (en) 2006-09-19 2013-12-03 The Invention Science Fund I, Llc Evaluation systems and methods for coordinating software agents
US20150215388A1 (en) * 2014-01-27 2015-07-30 Google Inc. Anycast based, wide area distributed mapping and load balancing system
US9154584B1 (en) 2012-07-05 2015-10-06 A10 Networks, Inc. Allocating buffer for TCP proxy session based on dynamic network conditions
US9219751B1 (en) 2006-10-17 2015-12-22 A10 Networks, Inc. System and method to apply forwarding policy to an application session
US9253152B1 (en) 2006-10-17 2016-02-02 A10 Networks, Inc. Applying a packet routing policy to an application session
US9270774B2 (en) 2011-10-24 2016-02-23 A10 Networks, Inc. Combining stateless and stateful server load balancing
US9338225B2 (en) 2012-12-06 2016-05-10 A10 Networks, Inc. Forwarding policies on a virtual service network
US9386088B2 (en) 2011-11-29 2016-07-05 A10 Networks, Inc. Accelerating service processing using fast path TCP
WO2016133965A1 (en) * 2015-02-18 2016-08-25 KEMP Technologies Inc. Methods for intelligent data traffic steering
US9531846B2 (en) 2013-01-23 2016-12-27 A10 Networks, Inc. Reducing buffer usage for TCP proxy session based on delayed acknowledgement
US9609052B2 (en) 2010-12-02 2017-03-28 A10 Networks, Inc. Distributing application traffic to servers based on dynamic service response time
US9705800B2 (en) 2012-09-25 2017-07-11 A10 Networks, Inc. Load distribution in data networks
US9755897B1 (en) * 2007-10-25 2017-09-05 United Services Automobile Association (Usaa) Enhanced throttle management system
US9806943B2 (en) 2014-04-24 2017-10-31 A10 Networks, Inc. Enabling planned upgrade/downgrade of network devices without impacting network sessions
US9843484B2 (en) 2012-09-25 2017-12-12 A10 Networks, Inc. Graceful scaling in software driven networks
US9866487B2 (en) 2014-06-05 2018-01-09 KEMP Technologies Inc. Adaptive load balancer and methods for intelligent data traffic steering
US9900252B2 (en) 2013-03-08 2018-02-20 A10 Networks, Inc. Application delivery controller and global server load balancer
US9906422B2 (en) 2014-05-16 2018-02-27 A10 Networks, Inc. Distributed system to determine a server's health
US9917781B2 (en) 2014-06-05 2018-03-13 KEMP Technologies Inc. Methods for intelligent data traffic steering
US9942152B2 (en) 2014-03-25 2018-04-10 A10 Networks, Inc. Forwarding data packets using a service-based forwarding policy
US9942162B2 (en) 2014-03-31 2018-04-10 A10 Networks, Inc. Active application response delay time
US9960967B2 (en) 2009-10-21 2018-05-01 A10 Networks, Inc. Determining an application delivery server based on geo-location information
US9961135B2 (en) 2010-09-30 2018-05-01 A10 Networks, Inc. System and method to balance servers based on server load status
US9986061B2 (en) 2014-06-03 2018-05-29 A10 Networks, Inc. Programming a data network device using user defined scripts
US9992229B2 (en) 2014-06-03 2018-06-05 A10 Networks, Inc. Programming a data network device using user defined scripts with licenses
US9992107B2 (en) 2013-03-15 2018-06-05 A10 Networks, Inc. Processing data packets using a policy based network path
US10002141B2 (en) 2012-09-25 2018-06-19 A10 Networks, Inc. Distributed database in software driven networks
US10021174B2 (en) 2012-09-25 2018-07-10 A10 Networks, Inc. Distributing service sessions
US10020979B1 (en) 2014-03-25 2018-07-10 A10 Networks, Inc. Allocating resources in multi-core computing environments
US10027761B2 (en) 2013-05-03 2018-07-17 A10 Networks, Inc. Facilitating a secure 3 party network session by a network device
US10038693B2 (en) 2013-05-03 2018-07-31 A10 Networks, Inc. Facilitating secure network traffic by an application delivery controller
US10044582B2 (en) 2012-01-28 2018-08-07 A10 Networks, Inc. Generating secure name records
WO2018182979A1 (en) * 2017-03-30 2018-10-04 Microsoft Technology Licensing, Llc Systems and methods for achieving session stickiness for stateful cloud services with non-sticky load balancers
US10129122B2 (en) 2014-06-03 2018-11-13 A10 Networks, Inc. User defined objects for network devices
US10230770B2 (en) 2013-12-02 2019-03-12 A10 Networks, Inc. Network proxy layer for policy-based application proxies
USRE47296E1 (en) 2006-02-21 2019-03-12 A10 Networks, Inc. System and method for an adaptive TCP SYN cookie with time validation
US10243791B2 (en) 2015-08-13 2019-03-26 A10 Networks, Inc. Automated adjustment of subscriber policies

Families Citing this family (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7305450B2 (en) * 2001-03-29 2007-12-04 Nokia Corporation Method and apparatus for clustered SSL accelerator
FI20011651A (en) * 2001-08-15 2003-02-16 Nokia Corp The service cluster load tasapainoittaminen
US7984110B1 (en) * 2001-11-02 2011-07-19 Hewlett-Packard Company Method and system for load balancing
US20030217135A1 (en) * 2002-05-17 2003-11-20 Masayuki Chatani Dynamic player management
US20090118019A1 (en) * 2002-12-10 2009-05-07 Onlive, Inc. System for streaming databases serving real-time applications used through streaming interactive video
US9077991B2 (en) 2002-12-10 2015-07-07 Sony Computer Entertainment America Llc System and method for utilizing forward error correction with video compression
US8526490B2 (en) 2002-12-10 2013-09-03 Ol2, Inc. System and method for video compression using feedback including data related to the successful receipt of video content
US9138644B2 (en) 2002-12-10 2015-09-22 Sony Computer Entertainment America Llc System and method for accelerated machine switching
US9192859B2 (en) 2002-12-10 2015-11-24 Sony Computer Entertainment America Llc System and method for compressing video based on latency measurements and other feedback
US8549574B2 (en) 2002-12-10 2013-10-01 Ol2, Inc. Method of combining linear content and interactive content compressed together as streaming interactive video
US8964830B2 (en) 2002-12-10 2015-02-24 Ol2, Inc. System and method for multi-stream video compression using multiple encoding formats
US9314691B2 (en) 2002-12-10 2016-04-19 Sony Computer Entertainment America Llc System and method for compressing video frames or portions thereof based on feedback information from a client device
US9061207B2 (en) 2002-12-10 2015-06-23 Sony Computer Entertainment America Llc Temporary decoder apparatus and method
US8366552B2 (en) 2002-12-10 2013-02-05 Ol2, Inc. System and method for multi-stream video compression
US8711923B2 (en) 2002-12-10 2014-04-29 Ol2, Inc. System and method for selecting a video encoding format based on feedback data
US10201760B2 (en) 2002-12-10 2019-02-12 Sony Interactive Entertainment America Llc System and method for compressing video based on detected intraframe motion
US9108107B2 (en) 2002-12-10 2015-08-18 Sony Computer Entertainment America Llc Hosting and broadcasting virtual events using streaming interactive video
US9446305B2 (en) 2002-12-10 2016-09-20 Sony Interactive Entertainment America Llc System and method for improving the graphics performance of hosted applications
US8032619B2 (en) * 2003-04-16 2011-10-04 Sony Computer Entertainment America Llc Environment information server
US7409451B1 (en) 2003-05-30 2008-08-05 Aol Llc, A Delaware Limited Liability Company Switching between connectivity types to maintain connectivity
US7603464B2 (en) * 2003-06-04 2009-10-13 Sony Computer Entertainment Inc. Method and system for identifying available resources in a peer-to-peer network
US20050027862A1 (en) * 2003-07-18 2005-02-03 Nguyen Tien Le System and methods of cooperatively load-balancing clustered servers
US8296771B2 (en) * 2003-08-18 2012-10-23 Cray Inc. System and method for mapping between resource consumers and resource providers in a computing system
US20060031431A1 (en) * 2004-05-21 2006-02-09 Bea Systems, Inc. Reliable updating for a service oriented architecture
US7464165B2 (en) * 2004-12-02 2008-12-09 International Business Machines Corporation System and method for allocating resources on a network
US7676587B2 (en) * 2004-12-14 2010-03-09 Emc Corporation Distributed IP trunking and server clustering for sharing of an IP server address among IP servers
US7818401B2 (en) * 2004-12-23 2010-10-19 General Instrument Corporation Method and apparatus for providing decentralized load distribution
US8140678B2 (en) * 2004-12-28 2012-03-20 Sap Ag Failover protection from a failed worker node in a shared memory system
US7715847B2 (en) * 2005-03-09 2010-05-11 Qualcomm Incorporated Use of decremental assignments
US8224985B2 (en) 2005-10-04 2012-07-17 Sony Computer Entertainment Inc. Peer-to-peer communication traversing symmetric network address translators
JP4001896B2 (en) * 2005-11-30 2007-10-31 株式会社コナミデジタルエンタテインメント Game system, server and terminal
US8707323B2 (en) * 2005-12-30 2014-04-22 Sap Ag Load balancing algorithm for servicing client requests
US8141164B2 (en) 2006-08-21 2012-03-20 Citrix Systems, Inc. Systems and methods for dynamic decentralized load balancing across multiple sites
US7995478B2 (en) 2007-05-30 2011-08-09 Sony Computer Entertainment Inc. Network communication with path MTU size discovery
US8171123B2 (en) 2007-12-04 2012-05-01 Sony Computer Entertainment Inc. Network bandwidth detection and distribution
US7856506B2 (en) 2008-03-05 2010-12-21 Sony Computer Entertainment Inc. Traversal of symmetric network address translator for multiple simultaneous connections
US8296417B1 (en) 2008-07-29 2012-10-23 Alexander Gershon Peak traffic management
US9959145B1 (en) 2008-07-29 2018-05-01 Amazon Technologies, Inc. Scalable game space
US8543713B2 (en) * 2008-08-19 2013-09-24 Apple Inc. Computing environment arranged to support predetermined URL patterns
US8060626B2 (en) 2008-09-22 2011-11-15 Sony Computer Entertainment America Llc. Method for host selection based on discovered NAT type
US7984151B1 (en) 2008-10-09 2011-07-19 Google Inc. Determining placement of user data to optimize resource utilization for distributed systems
US8458451B2 (en) * 2009-01-20 2013-06-04 New York University Database outsourcing with access privacy
US20100220622A1 (en) * 2009-02-27 2010-09-02 Yottaa Inc Adaptive network with automatic scaling
US20100223364A1 (en) * 2009-02-27 2010-09-02 Yottaa Inc System and method for network traffic management and load balancing
US20100228819A1 (en) * 2009-03-05 2010-09-09 Yottaa Inc System and method for performance acceleration, data protection, disaster recovery and on-demand scaling of computer applications
CN102859934B (en) * 2009-03-31 2016-05-11 考持·维 Network access management and security protection systems and methods can access computer services
US8332737B2 (en) * 2009-06-26 2012-12-11 Agilent Technologies, Inc. Instrument control system and methods
US8484716B1 (en) 2009-08-07 2013-07-09 Adobe Systems Incorporated Hosting a server application on multiple network tiers
US8761008B2 (en) 2009-10-29 2014-06-24 The Boeing Company System, apparatus, and method for communication in a tactical network
US8243960B2 (en) * 2010-03-04 2012-08-14 Bose Corporation Planar audio amplifier output inductor with current sense
US9168457B2 (en) 2010-09-14 2015-10-27 Sony Computer Entertainment America Llc System and method for retaining system state
US9264396B2 (en) * 2012-06-04 2016-02-16 International Business Machines Corporation Workload balancing between nodes in a cluster as required by allocations of IP addresses within a cluster
US9888068B2 (en) * 2013-04-06 2018-02-06 Citrix Systems, Inc. Systems and methods for maintaining session persistence in a cluster system
US10250677B1 (en) * 2018-05-02 2019-04-02 Cyberark Software Ltd. Decentralized network address control

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6658473B1 (en) * 2000-02-25 2003-12-02 Sun Microsystems, Inc. Method and apparatus for distributing load in a computer environment

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US5031089A (en) * 1988-12-30 1991-07-09 United States Of America As Represented By The Administrator, National Aeronautics And Space Administration Dynamic resource allocation scheme for distributed heterogeneous computer systems
US6128279A (en) * 1997-10-06 2000-10-03 Web Balance, Inc. System for balancing loads among network servers
US6438652B1 (en) * 1998-10-09 2002-08-20 International Business Machines Corporation Load balancing cooperating cache servers by shifting forwarded request


Cited By (119)

Publication number Priority date Publication date Assignee Title
US8631113B2 (en) 2002-02-08 2014-01-14 Juniper Networks, Inc. Intelligent integrated network security device for high-availability applications
US20060005231A1 (en) * 2002-02-08 2006-01-05 Nir Zuk Intelligent integrated network security device for high-availability applications
US20100242093A1 (en) * 2002-02-08 2010-09-23 Juniper Networks, Inc. Intelligent integrated network security device for high-availability applications
US7734752B2 (en) * 2002-02-08 2010-06-08 Juniper Networks, Inc. Intelligent integrated network security device for high-availability applications
US8326961B2 (en) 2002-02-08 2012-12-04 Juniper Networks, Inc. Intelligent integrated network security device for high-availability applications
US8959197B2 (en) 2002-02-08 2015-02-17 Juniper Networks, Inc. Intelligent integrated network security device for high-availability applications
US7577730B2 (en) * 2002-11-27 2009-08-18 International Business Machines Corporation Semi-hierarchical system and method for administration of clusters of computer resources
US20040103166A1 (en) * 2002-11-27 2004-05-27 International Business Machines Corporation Semi-hierarchical system and method for administration of clusters of computer resources
US8194673B2 (en) 2003-05-28 2012-06-05 Citrix Systems, Inc. Policy based network address translation
US7760729B2 (en) 2003-05-28 2010-07-20 Citrix Systems, Inc. Policy based network address translation
US7181524B1 (en) * 2003-06-13 2007-02-20 Veritas Operating Corporation Method and apparatus for balancing a load among a plurality of servers in a computer system
US20060005063A1 (en) * 2004-05-21 2006-01-05 Bea Systems, Inc. Error handling for a service oriented architecture
US7653008B2 (en) 2004-05-21 2010-01-26 Bea Systems, Inc. Dynamically configurable service oriented architecture
US7779116B2 (en) * 2006-02-09 2010-08-17 International Business Machines Corporation Selecting servers based on load-balancing metric instances
US20070185997A1 (en) * 2006-02-09 2007-08-09 International Business Machines Corporation Selecting servers based on load-balancing metric instances
USRE47296E1 (en) 2006-02-21 2019-03-12 A10 Networks, Inc. System and method for an adaptive TCP SYN cookie with time validation
US20070282880A1 (en) * 2006-05-31 2007-12-06 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Partial role or task allocation responsive to data-transformative attributes
US8266321B2 (en) * 2006-06-12 2012-09-11 Cloudsoft Corporation Limited Self-managed distributed mediation networks
US20120209980A1 (en) * 2006-06-12 2012-08-16 Cloudsoft Corporation Limited Self-managed distributed mediation networks
US8812729B2 (en) * 2006-06-12 2014-08-19 Cloudsoft Corporation Limited Self-managed distributed mediation networks
US20080016198A1 (en) * 2006-06-12 2008-01-17 Enigmatec Corporation Self-managed distributed mediation networks
US8116207B2 (en) * 2006-08-21 2012-02-14 Citrix Systems, Inc. Systems and methods for weighted monitoring of network services
US20080043617A1 (en) * 2006-08-21 2008-02-21 Citrix Systems, Inc. Systems and methods for weighted monitoring of network services
US20080049786A1 (en) * 2006-08-22 2008-02-28 Maruthi Ram Systems and Methods for Providing Dynamic Spillover of Virtual Servers Based on Bandwidth
US8493858B2 (en) * 2006-08-22 2013-07-23 Citrix Systems, Inc Systems and methods for providing dynamic connection spillover among virtual servers
US9185019B2 (en) 2006-08-22 2015-11-10 Citrix Systems, Inc. Systems and methods for providing dynamic connection spillover among virtual servers
US8312120B2 (en) * 2006-08-22 2012-11-13 Citrix Systems, Inc. Systems and methods for providing dynamic spillover of virtual servers based on bandwidth
US20080049616A1 (en) * 2006-08-22 2008-02-28 Citrix Systems, Inc. Systems and methods for providing dynamic connection spillover among virtual servers
US8275871B2 (en) 2006-08-22 2012-09-25 Citrix Systems, Inc. Systems and methods for providing dynamic spillover of virtual servers based on bandwidth
US20080059560A1 (en) * 2006-08-29 2008-03-06 Samsung Electronics Co., Ltd Service distribution apparatus and method
US8359395B2 (en) 2006-08-29 2013-01-22 Samsung Electronics Co., Ltd. Service distribution apparatus and method
US8108532B2 (en) 2006-08-29 2012-01-31 Samsung Electronics Co., Ltd. Service distribution apparatus and method
US20100119064A1 (en) * 2006-08-29 2010-05-13 Samsung Electronics Co., Ltd Service distribution apparatus and method
WO2008026837A1 (en) * 2006-08-29 2008-03-06 Samsung Electronics Co., Ltd. Service distribution apparatus and method
US20080071888A1 (en) * 2006-09-19 2008-03-20 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Configuring software agent security remotely
US9306975B2 (en) 2006-09-19 2016-04-05 The Invention Science Fund I, Llc Transmitting aggregated information arising from appnet information
US8055797B2 (en) 2006-09-19 2011-11-08 The Invention Science Fund I, Llc Transmitting aggregated information arising from appnet information
US8984579B2 (en) 2006-09-19 2015-03-17 The Invention Science Fund I, Llc Evaluation systems and methods for coordinating software agents
US20110060809A1 (en) * 2006-09-19 2011-03-10 Searete Llc Transmitting aggregated information arising from appnet information
US20110047369A1 (en) * 2006-09-19 2011-02-24 Cohen Alexander J Configuring Software Agent Security Remotely
US9479535B2 (en) 2006-09-19 2016-10-25 Invention Science Fund I, Llc Transmitting aggregated information arising from appnet information
US9680699B2 (en) 2006-09-19 2017-06-13 Invention Science Fund I, Llc Evaluation systems and methods for coordinating software agents
US20080127293A1 (en) * 2006-09-19 2008-05-29 Searete LLC, a liability corporation of the State of Delaware Evaluation systems and methods for coordinating software agents
US8224930B2 (en) 2006-09-19 2012-07-17 The Invention Science Fund I, Llc Signaling partial service configuration changes in appnets
US20080071871A1 (en) * 2006-09-19 2008-03-20 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Transmitting aggregated information arising from appnet information
US20080072278A1 (en) * 2006-09-19 2008-03-20 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Evaluation systems and methods for coordinating software agents
US8601104B2 (en) 2006-09-19 2013-12-03 The Invention Science Fund I, Llc Using network access port linkages for data structure update decisions
US8281036B2 (en) 2006-09-19 2012-10-02 The Invention Science Fund I, Llc Using network access port linkages for data structure update decisions
US20080072241A1 (en) * 2006-09-19 2008-03-20 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Evaluation systems and methods for coordinating software agents
US20080072032A1 (en) * 2006-09-19 2008-03-20 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Configuring software agent security remotely
US7752255B2 (en) 2006-09-19 2010-07-06 The Invention Science Fund I, Inc Configuring software agent security remotely
US9178911B2 (en) 2006-09-19 2015-11-03 Invention Science Fund I, Llc Evaluation systems and methods for coordinating software agents
US20080071889A1 (en) * 2006-09-19 2008-03-20 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Signaling partial service configuration changes in appnets
US20080071891A1 (en) * 2006-09-19 2008-03-20 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Signaling partial service configuration changes in appnets
US8601530B2 (en) 2006-09-19 2013-12-03 The Invention Science Fund I, Llc Evaluation systems and methods for coordinating software agents
US8607336B2 (en) 2006-09-19 2013-12-10 The Invention Science Fund I, Llc Evaluation systems and methods for coordinating software agents
US8627402B2 (en) 2006-09-19 2014-01-07 The Invention Science Fund I, Llc Evaluation systems and methods for coordinating software agents
US8055732B2 (en) 2006-09-19 2011-11-08 The Invention Science Fund I, Llc Signaling partial service configuration changes in appnets
US9219751B1 (en) 2006-10-17 2015-12-22 A10 Networks, Inc. System and method to apply forwarding policy to an application session
US9497201B2 (en) 2006-10-17 2016-11-15 A10 Networks, Inc. Applying security policy to an application session
US9253152B1 (en) 2006-10-17 2016-02-02 A10 Networks, Inc. Applying a packet routing policy to an application session
US9270705B1 (en) 2006-10-17 2016-02-23 A10 Networks, Inc. Applying security policy to an application session
US8077622B2 (en) 2007-08-03 2011-12-13 Citrix Systems, Inc. Systems and methods for efficiently load balancing based on least connections
US20090034417A1 (en) * 2007-08-03 2009-02-05 Ravi Kondamuru Systems and Methods for Efficiently Load Balancing Based on Least Connections
US20090106349A1 (en) * 2007-10-19 2009-04-23 James Harris Systems and methods for managing cookies via http content layer
US7925694B2 (en) 2007-10-19 2011-04-12 Citrix Systems, Inc. Systems and methods for managing cookies via HTTP content layer
US9755897B1 (en) * 2007-10-25 2017-09-05 United Services Automobile Association (Usaa) Enhanced throttle management system
US8769660B2 (en) 2008-01-26 2014-07-01 Citrix Systems, Inc. Systems and methods for proxying cookies for SSL VPN clientless sessions
US8090877B2 (en) 2008-01-26 2012-01-03 Citrix Systems, Inc. Systems and methods for fine grain policy driven cookie proxying
US9059966B2 (en) 2008-01-26 2015-06-16 Citrix Systems, Inc. Systems and methods for proxying cookies for SSL VPN clientless sessions
US20100299437A1 (en) * 2009-05-22 2010-11-25 Comcast Interactive Media, Llc Web Service System and Method
US20110040892A1 (en) * 2009-08-11 2011-02-17 Fujitsu Limited Load balancing apparatus and load balancing method
US8892768B2 (en) * 2009-08-11 2014-11-18 Fujitsu Limited Load balancing apparatus and load balancing method
US9960967B2 (en) 2009-10-21 2018-05-01 A10 Networks, Inc. Determining an application delivery server based on geo-location information
US20110255125A1 (en) * 2010-04-15 2011-10-20 Xerox Corporation System and method for burstiness-aware scheduling and capacity assessment on a network of electronic devices
US8705090B2 (en) * 2010-04-15 2014-04-22 Xerox Corporation System and method for burstiness-aware scheduling and capacity assessment on a network of electronic devices
US9961135B2 (en) 2010-09-30 2018-05-01 A10 Networks, Inc. System and method to balance servers based on server load status
US9609052B2 (en) 2010-12-02 2017-03-28 A10 Networks, Inc. Distributing application traffic to servers based on dynamic service response time
US9961136B2 (en) 2010-12-02 2018-05-01 A10 Networks, Inc. Distributing application traffic to servers based on dynamic service response time
US10178165B2 (en) 2010-12-02 2019-01-08 A10 Networks, Inc. Distributing application traffic to servers based on dynamic service response time
US9906591B2 (en) 2011-10-24 2018-02-27 A10 Networks, Inc. Combining stateless and stateful server load balancing
US9270774B2 (en) 2011-10-24 2016-02-23 A10 Networks, Inc. Combining stateless and stateful server load balancing
US9386088B2 (en) 2011-11-29 2016-07-05 A10 Networks, Inc. Accelerating service processing using fast path TCP
US9979801B2 (en) 2011-12-23 2018-05-22 A10 Networks, Inc. Methods to manage services over a service gateway
US20130166762A1 (en) * 2011-12-23 2013-06-27 A10 Networks, Inc. Methods to Manage Services over a Service Gateway
US9094364B2 (en) * 2011-12-23 2015-07-28 A10 Networks, Inc. Methods to manage services over a service gateway
US10044582B2 (en) 2012-01-28 2018-08-07 A10 Networks, Inc. Generating secure name records
US9154584B1 (en) 2012-07-05 2015-10-06 A10 Networks, Inc. Allocating buffer for TCP proxy session based on dynamic network conditions
US9602442B2 (en) 2012-07-05 2017-03-21 A10 Networks, Inc. Allocating buffer for TCP proxy session based on dynamic network conditions
US10002141B2 (en) 2012-09-25 2018-06-19 A10 Networks, Inc. Distributed database in software driven networks
US9843484B2 (en) 2012-09-25 2017-12-12 A10 Networks, Inc. Graceful scaling in software driven networks
US9705800B2 (en) 2012-09-25 2017-07-11 A10 Networks, Inc. Load distribution in data networks
US10021174B2 (en) 2012-09-25 2018-07-10 A10 Networks, Inc. Distributing service sessions
US9338225B2 (en) 2012-12-06 2016-05-10 A10 Networks, Inc. Forwarding policies on a virtual service network
US9544364B2 (en) 2012-12-06 2017-01-10 A10 Networks, Inc. Forwarding policies on a virtual service network
US9531846B2 (en) 2013-01-23 2016-12-27 A10 Networks, Inc. Reducing buffer usage for TCP proxy session based on delayed acknowledgement
US9979665B2 (en) 2013-01-23 2018-05-22 A10 Networks, Inc. Reducing buffer usage for TCP proxy session based on delayed acknowledgement
US9900252B2 (en) 2013-03-08 2018-02-20 A10 Networks, Inc. Application delivery controller and global server load balancer
US9992107B2 (en) 2013-03-15 2018-06-05 A10 Networks, Inc. Processing data packets using a policy based network path
US10038693B2 (en) 2013-05-03 2018-07-31 A10 Networks, Inc. Facilitating secure network traffic by an application delivery controller
US10027761B2 (en) 2013-05-03 2018-07-17 A10 Networks, Inc. Facilitating a secure 3 party network session by a network device
US10230770B2 (en) 2013-12-02 2019-03-12 A10 Networks, Inc. Network proxy layer for policy-based application proxies
US9467506B2 (en) * 2014-01-27 2016-10-11 Google Inc. Anycast based, wide area distributed mapping and load balancing system
US20150215388A1 (en) * 2014-01-27 2015-07-30 Google Inc. Anycast based, wide area distributed mapping and load balancing system
US9942152B2 (en) 2014-03-25 2018-04-10 A10 Networks, Inc. Forwarding data packets using a service-based forwarding policy
US10020979B1 (en) 2014-03-25 2018-07-10 A10 Networks, Inc. Allocating resources in multi-core computing environments
US9942162B2 (en) 2014-03-31 2018-04-10 A10 Networks, Inc. Active application response delay time
US10257101B2 (en) 2014-03-31 2019-04-09 A10 Networks, Inc. Active application response delay time
US9806943B2 (en) 2014-04-24 2017-10-31 A10 Networks, Inc. Enabling planned upgrade/downgrade of network devices without impacting network sessions
US10110429B2 (en) 2014-04-24 2018-10-23 A10 Networks, Inc. Enabling planned upgrade/downgrade of network devices without impacting network sessions
US9906422B2 (en) 2014-05-16 2018-02-27 A10 Networks, Inc. Distributed system to determine a server's health
US9992229B2 (en) 2014-06-03 2018-06-05 A10 Networks, Inc. Programming a data network device using user defined scripts with licenses
US10129122B2 (en) 2014-06-03 2018-11-13 A10 Networks, Inc. User defined objects for network devices
US9986061B2 (en) 2014-06-03 2018-05-29 A10 Networks, Inc. Programming a data network device using user defined scripts
US9866487B2 (en) 2014-06-05 2018-01-09 KEMP Technologies Inc. Adaptive load balancer and methods for intelligent data traffic steering
US9917781B2 (en) 2014-06-05 2018-03-13 KEMP Technologies Inc. Methods for intelligent data traffic steering
WO2016133965A1 (en) * 2015-02-18 2016-08-25 KEMP Technologies Inc. Methods for intelligent data traffic steering
US10243791B2 (en) 2015-08-13 2019-03-26 A10 Networks, Inc. Automated adjustment of subscriber policies
WO2018182979A1 (en) * 2017-03-30 2018-10-04 Microsoft Technology Licensing, Llc Systems and methods for achieving session stickiness for stateful cloud services with non-sticky load balancers

Also Published As

Publication number Publication date
US7395335B2 (en) 2008-07-01
US7155515B1 (en) 2006-12-26

Similar Documents

Publication Publication Date Title
US5612957A (en) Routing method in scalable distributed computing environment
KR101678711B1 (en) Load balancing across layer-2 domains
US8433819B2 (en) Facilitating download of requested data from server utilizing virtual network connections between client devices
CN101167054B (en) Methods and apparatus for selective workload off-loading across multiple data centers
US9647954B2 (en) Method and system for optimizing a network by independently scaling control segments and data flow
US5938732A (en) Load balancing and failover of network services
US7609619B2 (en) Active-active data center using RHI, BGP, and IGP anycast for disaster recovery and load distribution
EP1521409B1 (en) System and method for load balancing and fail over
US6891839B2 (en) Distributing packets among multiple tiers of network appliances
US6772211B2 (en) Content-aware web switch without delayed binding and methods thereof
US7120697B2 (en) Methods, systems and computer program products for port assignments of multiple application instances using the same source IP address
US7500243B2 (en) Load balancing method and system using multiple load balancing servers
US6996631B1 (en) System having a single IP address associated with communication protocol stacks in a cluster of processing systems
US6650641B1 (en) Network address translation using a forwarding agent
US7042870B1 (en) Sending instructions from a service manager to forwarding agents on a need to know basis
EP1506491B1 (en) Dynamic player management
JP4000331B2 (en) System for port mapping of network
EP1388073B1 (en) Optimal route selection in a content delivery network
US9197699B2 (en) Load-balancing cluster
US7693050B2 (en) Stateless, affinity-preserving load balancing
EP1010102B1 (en) Arrangement for load sharing in computer networks
US9602591B2 (en) Managing TCP anycast requests
US20060126619A1 (en) Aggregation over multiple processing nodes of network resources each providing offloaded connections between applications over a network
US6954784B2 (en) Systems, method and computer program products for cluster workload distribution without preconfigured port identification by utilizing a port of multiple ports associated with a single IP address
EP1599793B1 (en) System and method for server load balancing and server affinity

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BROWN, JUSTIN D.;SMITH, JOHN W.;LINK, CRAIG A.;AND OTHERS;REEL/FRAME:018915/0986;SIGNING DATES FROM 20010113 TO 20010205

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034543/0001

Effective date: 20141014

FPAY Fee payment

Year of fee payment: 8