GB2346302A - Load balancing in a computer network - Google Patents

Load balancing in a computer network

Info

Publication number
GB2346302A
Authority
GB
United Kingdom
Prior art keywords
nodes
network
usage
node
traffic
Prior art date
Legal status: Granted
Application number
GB9901848A
Other versions
GB2346302B (en)
GB9901848D0 (en)
Inventor
Sohail Syyed
Jane Henderson Shaw
Current Assignee
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date: 1999-01-29
Filing date: 1999-01-29
Publication date: 2000-08-02
Application filed by International Business Machines Corp
Priority to GB9901848A
Publication of GB9901848D0
Publication of GB2346302A
Application granted
Publication of GB2346302B
Anticipated expiration
Status: Expired - Fee Related

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/0876 Network utilisation, e.g. volume of load or congestion level
    • H04L 43/0888 Throughput
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/24 Multipath
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/11 Identifying congestion
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 Server selection for load balancing
    • H04L 67/1008 Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L 67/101 Server selection for load balancing based on network conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer And Data Communications (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A method of workload balancing is described, for use in a network 100 having a plurality of nodes 102-110. The network has a plurality of possible routes between at least two of the nodes 102, 104. The method comprises monitoring usage of the network by at least each of the at least two of the plurality of nodes. Data is then recorded regarding the usage of the network. Pattern recognition is performed on the recorded data so as to recognise usage patterns. Responsive to such usage patterns, routes are allocated between at least the at least two of the plurality of nodes. For a node with multiple servers, server pool balancing is provided in accordance with usage patterns.

Description

PRE-EMPTIVE NETWORK LOAD BALANCING BY PREDICTIVE CONFIGURATION

Field of the Invention

The present invention relates to load balancing between a plurality of possible routes between end nodes in a network and to load balancing between a plurality of possible servers at a given node in a network.
Background of the Invention

Conventionally, in a computer network there are multiple nodes which are usually connected by multiple paths. There may be multiple possible routes between a pair of end nodes, using different paths via different intermediate nodes, and there may also be a direct route between a pair of end nodes. Additionally, a single end node may actually be associated with multiple server computers.
In a computer network, the usage rate of the most frequent users and of the machines which most frequently make requests via the network may fluctuate for various reasons: seasonally, at specific hours of the day, or even randomly.
Additionally, servers may be added to a network, taken away, or may fail.
Applications such as program development, and other interactive users, may at times demand a larger share of the processing power of servers located on the network. These environmental changes may seriously overload certain computers on the network and result in a dramatic degradation of performance (response time and/or throughput). In such a network, it is desirable that processing be performed with maximum efficiency. This requires some sort of load balancing among the computers on the network, including balancing which of the multiple possible routes between a pair of nodes is used and which one of multiple server computers at a node is used to process a request.
Load balancers are known in the art. Static load balancers are manually tuned by a network operator (a human) who observes how work is apportioned among the computers in a network and, based on his observations, tunes the network, including the usage of different network routes, in order to even out the load. The balancing is fixed when the network is in operation and thus the network cannot respond to unusual circumstances or changes in the network's usage patterns. If changes are deemed to be necessary, because of failures or slow response times, operator intervention is required and, in the worst case, the network must be shut down while the operator retunes it.
Static allocation of requests is frequently used for load balancing in computer networks. The characteristics of requests and the computers attached to the network typically constitute static information. As some of the network parameters, for example, the arrival rate of requests, vary gradually with time, such parameters may be estimated and used for improved request routing. However, rerouting is still triggered by human observation and decision.
Dynamic load balancers are also known in the art. Such load balancers are fully dynamic, that is, they reassess the need for load balancing after each request. These systems are not practical because of the overhead they incur: the large amount of processor power needed to implement dynamic load balancing would otherwise be devoted to processing the actual requests.
It would therefore be desirable to provide a method of improving both the performance of the routing of requests on a network and the performance of selecting which one of a number of servers at a node is used to service a request received at that node.
Disclosure of the Invention

Accordingly, the present invention provides a method of workload balancing, for use in a computer network having a plurality of nodes, the network having a plurality of possible routes between at least two of the plurality of nodes, the method comprising the steps of: monitoring usage of the network by at least one of the at least two of the plurality of nodes; recording data regarding the said usage of the network; performing pattern recognition on the recorded data so as to recognise usage patterns; and responsive to such usage patterns, allocating workload to routes between at least the at least two of the plurality of nodes. By allocating the workload between the possible routes in response to the anticipated usage as established by the pattern recognition, improved performance of the usage of the network may be achieved. The route to be used by traffic from a node is predetermined, but this predetermination has been optimised so as to provide the best performance based on past recorded data.
In a preferred embodiment, the recording step records data regarding the originating node, destination node and the size of traffic between the originating node and destination node. By recording the size of traffic between originating and destination nodes, the routes may be allocated taking this into account.
Preferably, the allocating step allocates traffic from nodes having the largest amount of traffic to different ones of the plurality of possible routes. Allocating traffic from nodes having the largest amount of traffic to different ones of the plurality of possible routes means that traffic from nodes having the largest usage of the network is sent on different routes and so spreads the workload.
More preferably, the allocating step allocates traffic from nodes having only light usage to a particular one or to particular ones of the plurality of possible routes. Allocating light users to a subset of the possible routes between nodes ensures that traffic from light users of the network follows a fast, efficient path.
The invention also provides a method of workload balancing, for use in a computer network having a plurality of nodes, at least one of the nodes having a plurality of servers associated therewith, the method comprising the steps of: monitoring usage of the nodes having a plurality of servers associated therewith; recording data regarding said usage of the network; performing pattern recognition on the recorded data so as to recognise usage patterns; and responsive to such usage patterns, allocating workload between each of the plurality of servers at said node.
Preferably, the allocating step allocates workload from nodes having the largest amount of traffic to different ones of said plurality of servers associated with a node. Allocating workload from nodes having the largest amount of traffic to different ones of the plurality of servers means that traffic from nodes having the largest usage of the node is allocated to different servers and so spreads the workload between the servers.
Preferably, the allocating step allocates traffic from nodes having a small amount of workload to a particular one or to particular ones of said plurality of servers associated with a node. Allocating light users to a subset of the possible servers ensures that workload from light users of the network is processed quickly and efficiently.
Brief Description of the Drawings

Embodiments of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:

Figure 1 is a schematic diagram of a computer network having five nodes and the interconnections therebetween;

Figure 2 is a detailed view of one of the nodes of the network of Figure 1, the node having multiple servers associated therewith;

Figure 3 is a flowchart showing the steps performed during the monitoring phase of the present invention;

Figure 4 is a flowchart showing the steps performed during the pattern recognition phase of the present invention; and

Figure 5 is a flowchart showing the steps performed during the allocating phase of the present invention.
Detailed Description of the Invention

Referring firstly to Figure 1, connections between five separate nodes 102, 104, 106, 108, 110 of a network 100 are shown schematically.
If it is desired to transmit a request or data from node 102 to node 104, there are three possible routes that the request may take: it may be sent from node 102 to node 104 directly, it may be sent through node 106, or it may be sent through nodes 106, 108 and 110.
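As a concrete illustration, here is a minimal sketch (in Python, which the patent itself does not use) that enumerates the loop-free routes of this five-node topology. The adjacency table and the simple_routes helper are assumptions inferred from the description, not part of the patent.

```python
# Adjacency of the Figure 1 network as inferred from the text:
# 102-104 direct, 102-106, 106-104, 106-108, 108-110, 110-104.
NETWORK_LINKS = {
    102: [104, 106],
    104: [102, 106, 110],
    106: [102, 104, 108],
    108: [106, 110],
    110: [104, 108],
}

def simple_routes(start, end, visited=None):
    """Yield every loop-free route from start to end as a list of nodes."""
    visited = (visited or []) + [start]
    if start == end:
        yield visited
        return
    for neighbour in NETWORK_LINKS[start]:
        if neighbour not in visited:
            yield from simple_routes(neighbour, end, visited)

for route in simple_routes(102, 104):
    print(" -> ".join(str(n) for n in route))
```

Run as shown, this prints exactly the three routes just described: the direct route, the route through node 106, and the route through nodes 106, 108 and 110.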
Figure 2 shows a detailed view of node 104 of the network of Figure 1. Node 104 has multiple servers 202, 204, 206, 208 associated with it.
To each of the other nodes 102, 106, 108, 110 on the network 100, node 104 appears as a single server. Requests addressed to node 104 are shared between the multiple servers 202, 204, 206, 208. Such balancing between servers associated with a single node is called server pool balancing. Clustered servers are commonly used, where the actual server supplying the service to the requester is not necessarily defined to the end user. Another example of a server pool is a typical implementation of an Internet World Wide Web (WWW) site or an Internet File Transfer Protocol (FTP) site, where multiple servers are associated with a single hostname.
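As an illustrative sketch only, a server pool of this kind might dispatch incoming requests by simple rotation; the ServerPool class and the example requests are hypothetical. The contribution described below is to drive the sharing of requests by recognised usage patterns rather than by such a fixed rotation.

```python
import itertools

class ServerPool:
    """Front a pool of servers so that, to the rest of the network,
    they appear as the single node 104."""

    def __init__(self, servers):
        self._next_server = itertools.cycle(servers)

    def dispatch(self, request):
        # Hand each incoming request to the next server in turn.
        server = next(self._next_server)
        print(f"request {request!r} -> server {server}")
        return server

pool = ServerPool([202, 204, 206, 208])   # the servers behind node 104
for req in ["GET /index.html", "GET /logo.png", "FTP RETR data.zip"]:
    pool.dispatch(req)
```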
Figure 3 shows a flowchart of the steps followed in monitoring machine usage of the network 100. The process starts at step 302. At step 304, a time period for the duration of monitoring is set. The time period may be defined in terms of hours, days, weeks or months. At step 306, the granularity of the collection of the information is set. That granularity may be defined as hourly periods in each day, daily periods in each week or month, or other similar periods. At step 308, the usage by each node of the network is monitored for each period of time set by the granularity. If the granularity is defined as hourly periods in each day for seven days, then the usage by each node is recorded as the number of times each route is used on the network for different types and sizes of traffic in each hour of the day. During this time all usage of the network is monitored, with relevant information being recorded. Typically, the start node, the end node and the size and type of request or data will be recorded. The monitoring continues until the duration set for monitoring has elapsed. Once the end of the time period has been reached, the data is saved at step 310 so that it can be processed and any patterns in the data recognised. The process ends at step 312.
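A minimal sketch of this monitoring phase follows, assuming hourly granularity and a record_usage hook invoked for every request observed on the network. The UsageRecord layout mirrors the fields named above (start node, end node, size and type of the traffic), while the function and field names themselves are illustrative.

```python
import time
from collections import namedtuple

# One observation per request: start node, end node, size and type of the
# traffic, stamped with the hour of day in which it was seen (the
# granularity chosen at step 306).
UsageRecord = namedtuple("UsageRecord", "start_node end_node size kind hour")

usage_log = []

def record_usage(start_node, end_node, size, kind, now=None):
    """Step 308: record one observation of network usage."""
    now = time.time() if now is None else now
    hour = time.localtime(now).tm_hour
    usage_log.append(UsageRecord(start_node, end_node, size, kind, hour))

# Example observations; step 304 would bound the collection loop by the
# configured monitoring duration (e.g. seven days), and step 310 would then
# save usage_log for the pattern recognition phase.
record_usage(102, 104, size=2_000_000, kind="web")
record_usage(108, 104, size=50_000_000, kind="image")
```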
Referring to Figure 4, the pattern recognition process starts at step 402. At step 404, pattern recognition is performed on the stored network usage information. This usage pattern recognition is performed for each user and/or machine. The pattern recognition process ends at step 406.
The usage pattern recognition is used to recognise patterns in the way in which particular users or machines use the network at any given time of day, week or month. For example, Internet web page accesses may peak around the lunchtime period for many employees. Since the usage of each user/machine has now been determined, it is possible to predict usage patterns for a given time period. Once usage patterns have been predicted, it is possible to distribute traffic across multiple network routes in an optimal fashion.
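One simple way to realise this step is sketched below: usage records such as those collected in the monitoring sketch above are bucketed by originating node and hour of day, and buckets whose per-day average exceeds a threshold are treated as predictable heavy-usage periods. The averaging, the threshold and the peak_hours helper are illustrative choices, not prescribed by the patent.

```python
from collections import defaultdict

def peak_hours(usage_log, days_monitored, threshold_bytes):
    """Return the set of (start_node, hour) buckets whose average daily
    traffic over the monitored period exceeds threshold_bytes."""
    totals = defaultdict(int)
    for rec in usage_log:
        totals[(rec.start_node, rec.hour)] += rec.size
    return {
        (node, hour)
        for (node, hour), total in totals.items()
        if total / days_monitored >= threshold_bytes
    }

# E.g. node 102 fetching web pages around 12:00 every weekday and node 108
# pulling large images from 11:00 would both surface as (node, hour) peaks.
```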
Referring to Figure 5, the process of distributing traffic across multiple network routes and multiple machines starts at step 502. At step 504, once usage patterns of each user/machine have been determined, the nodes between which the most traffic is generated can have that traffic distributed across the multiple network routes available between those nodes. Additionally, the nodes which generate the most requests for a node having multiple servers can have their requests distributed across the multiple servers associated with that node. In this way, the load across the network is more evenly balanced. The process ends at step 506.
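A sketch of this allocation policy follows, assuming per-node traffic volumes are already known from the pattern recognition phase and that the route list matches the Figure 1 example; the allocate_routes helper, the threshold and the volumes are illustrative.

```python
def allocate_routes(traffic_by_node, routes, light_threshold, light_route):
    """Map each originating node to the route its traffic should take."""
    assignment = {}
    heavy = [node for node, volume in sorted(traffic_by_node.items(),
                                             key=lambda kv: kv[1],
                                             reverse=True)
             if volume >= light_threshold]
    for i, node in enumerate(heavy):
        assignment[node] = routes[i % len(routes)]   # spread the heavy users
    for node in traffic_by_node:
        assignment.setdefault(node, light_route)     # light users share one fast path
    return assignment

routes = [(102, 104), (102, 106, 104), (102, 106, 108, 110, 104)]
print(allocate_routes({102: 9_000_000, 108: 7_000_000, 110: 10_000},
                      routes[:2],
                      light_threshold=1_000_000,
                      light_route=routes[0]))
```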
As an example, if a user at node 102 downloads a large amount of data, such as web pages, news and the like, every weekday at 12:00pm (or thereabouts), and a user at node 108 starts downloading large images that take two hours to complete every weekday at 11:00am, then this usage pattern can be detected. The two users can be placed on separate network gateways so as to distribute the load across the network.
In addition to this pre-emptive load balancing, account may be taken of priority users who need a clear network route. This may be achieved by allocating only traffic from light users to certain network routes or machines. The usage distribution could also be used to provide light users with a fast, efficient path.
In a particular embodiment of the present invention, Dynamic Host Configuration Protocol (DHCP) Internet Protocol (IP) addresses are allocated. Under DHCP, a server defines the network gateway and subnet mask used by each of its clients. Allocation of DHCP IP addresses may be done on the basis of distributing heavy users and light users to different gateways, or alternatively distributing users on any policy basis that has been decided in order to provide optimum performance.
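As a hypothetical sketch of this embodiment, the policy might be expressed by generating ISC-dhcpd-style host entries that hand heavy and light clients different default gateways. The MAC addresses, gateway IPs and the emit_host_entry helper are all invented for illustration; nothing here is taken from the patent itself.

```python
# Gateways chosen by the balancing policy: heavy users on one gateway,
# light users on another.
GATEWAYS = {"heavy": "192.168.0.1", "light": "192.168.0.2"}

def emit_host_entry(hostname, mac, usage_class):
    """Emit a dhcpd-style host block giving this client the gateway
    selected for its usage class."""
    gateway = GATEWAYS[usage_class]
    return (f"host {hostname} {{\n"
            f"  hardware ethernet {mac};\n"
            f"  option routers {gateway};\n"
            f"}}\n")

for name, mac, cls in [("node102", "00:11:22:33:44:55", "heavy"),
                       ("node108", "00:11:22:33:44:66", "heavy"),
                       ("node110", "00:11:22:33:44:77", "light")]:
    print(emit_host_entry(name, mac, cls))
```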
In another embodiment of the present invention, the routing of packets at gateways, where more than one physical route exists between the gateways, may be controlled according to the present invention. Heavy users of the network may be routed through a short route, whilst light users of the network may be routed through routes with less available throughput.
In another embodiment of the invention, groups of users may be identified from all of the users, so that requests from each of the groups are allocated to separate machines in order to optimise machine loading.

Claims (12)

  1. A method of workload balancing, for use in a computer network having a plurality of nodes, the network having a plurality of possible routes between at least two of the plurality of nodes, the method comprising the steps of: monitoring usage of the network by at least one of the at least two of the plurality of nodes; recording data regarding said usage of the network; performing pattern recognition on the recorded data so as to recognise usage patterns; and responsive to such usage patterns, allocating routes between at least said at least two of the plurality of nodes.
  2. A method as claimed in claim 1, wherein said monitoring step monitors usage of the network for a fixed period of time.
  3. A method as claimed in claim 1, wherein said monitoring step monitors usage of the network with a defined granularity period.
  4. A method as claimed in claim 1, wherein said recording step records data regarding the originating node, destination node and the size of traffic between the originating node and destination node.
  5. A method as claimed in claim 1, wherein said allocating step allocates traffic from nodes having the largest amount of traffic to different ones of said plurality of possible routes.
  6. A method as claimed in claim 1, wherein said allocating step allocates traffic from nodes having only light usage to a particular one or to particular ones of said plurality of possible routes.
  7. A method as claimed in claim 1, wherein a plurality of the nodes in the network are gateways and said allocating step allocates packets of data to routes between the gateways.
  8. A method of workload balancing, for use in a computer network having a plurality of nodes, at least one of the nodes having a plurality of servers associated therewith, the method comprising the steps of: monitoring usage of the nodes having a plurality of servers associated therewith; recording data regarding said usage of the network; performing pattern recognition on the recorded data so as to recognise usage patterns; and responsive to such usage patterns, allocating workload between each of the plurality of servers at said node.
  9. A method as claimed in claim 8, wherein said monitoring step monitors usage of the node for a fixed period of time.
  10. A method as claimed in claim 8, wherein said monitoring step monitors usage of the node with a defined granularity period.
  11. A method as claimed in claim 8, wherein said allocating step allocates workload from nodes having the largest amount of traffic to different ones of said plurality of servers associated with a node.
  12. A method as claimed in claim 8, wherein said allocating step allocates traffic from nodes having a small amount of workload to a particular one or to particular ones of said plurality of servers associated with a node.
GB9901848A 1999-01-29 1999-01-29 Pre-emptive network load balancing by predictive configuration Expired - Fee Related GB2346302B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB9901848A GB2346302B (en) 1999-01-29 1999-01-29 Pre-emptive network load balancing by predictive configuration

Publications (3)

Publication Number Publication Date
GB9901848D0 GB9901848D0 (en) 1999-03-17
GB2346302A 2000-08-02
GB2346302B GB2346302B (en) 2003-06-18

Family

ID=10846629

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9901848A Expired - Fee Related GB2346302B (en) 1999-01-29 1999-01-29 Pre-emptive network load balancing by predictive configuration

Country Status (1)

Country Link
GB (1) GB2346302B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1986002511A1 (en) * 1984-10-18 1986-04-24 Hughes Aircraft Company Load balancing for packet switching nodes
US4967345A (en) * 1988-06-23 1990-10-30 International Business Machines Corporation Method of selecting least weight routes in a communications network
US5493689A (en) * 1993-03-01 1996-02-20 International Business Machines Corporation System for configuring an event driven interface including control blocks defining good loop locations in a memory which represent detection of a characteristic pattern
US5459837A (en) * 1993-04-21 1995-10-17 Digital Equipment Corporation System to facilitate efficient utilization of network resources in a computer network
EP0648038A2 (en) * 1993-09-11 1995-04-12 International Business Machines Corporation A data processing system for providing user load levelling in a network
GB2281793A (en) * 1993-09-11 1995-03-15 Ibm A data processing system for providing user load levelling in a network
EP0694837A1 (en) * 1994-07-25 1996-01-31 International Business Machines Corporation Dynamic workload balancing
GB2305747A (en) * 1995-09-30 1997-04-16 Ibm Load balancing of connections to parallel servers
US5740371A (en) * 1995-09-30 1998-04-14 International Business Machines Corporation Load balancing of connections to parallel servers
EP0782072A1 (en) * 1995-12-26 1997-07-02 Mitsubishi Denki Kabushiki Kaisha File server load distribution system and method
GB2309558A (en) * 1996-01-26 1997-07-30 Ibm Load balancing across the processors of a server computer
WO1997029423A1 (en) * 1996-01-26 1997-08-14 International Business Machines Corporation Load balancing across the processors of a server computer
EP0817020A2 (en) * 1996-07-01 1998-01-07 Sun Microsystems, Inc. A name service for a redundant array of internet servers
GB2323256A (en) * 1997-03-14 1998-09-16 3Com Technologies Ltd Load balancing in a communication network
EP0892531A2 (en) * 1997-06-19 1999-01-20 Sun Microsystems Inc. Network load balancing for multi-computer server

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1300992A1 (en) * 2001-10-08 2003-04-09 Alcatel Method for distributing load over multiple shared resources in a communication network and network applying such a method
US8675655B2 (en) 2001-10-08 2014-03-18 Alcatel Lucent Method for distributing load over multiple shared resources in a communication network and network applying such a method
EP1318460A2 (en) * 2001-12-10 2003-06-11 Nec Corporation Node-to-node data transfer method and apparatus
EP1318460A3 (en) * 2001-12-10 2003-07-30 Nec Corporation Node-to-node data transfer method and apparatus
WO2004032429A1 (en) * 2002-10-01 2004-04-15 Telefonaktiebolaget Lm Ericsson (Publ) Access link bandwidth management scheme
FR2846762A1 (en) * 2002-11-06 2004-05-07 France Telecom Data-processing entities traffic volume consumption regulating process, involves comparing value function of decounting value with date of current evaluation having set of threshold values of traffic volume
WO2004045158A1 (en) * 2002-11-06 2004-05-27 France Telecom Method and system for regulating volume consumption of traffic of computer entities having access to shared resources
WO2010149832A1 (en) * 2009-06-26 2010-12-29 Nokia Corporation Multi-path transport
US8265086B2 (en) 2009-06-26 2012-09-11 Nokia Corporation Multi-path transport

Also Published As

Publication number Publication date
GB2346302B (en) 2003-06-18
GB9901848D0 (en) 1999-03-17

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20050129