US6571283B1 - Method for server farm configuration optimization - Google Patents

Method for server farm configuration optimization

Info

Publication number
US6571283B1
US6571283B1 (application US 09/474,706)
Authority
US
Grant status
Grant
Prior art keywords
server farm
optimization
method
step
selecting
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US09474706
Inventor
Lev Smorodinsky
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unisys Corp
Original Assignee
Unisys Corp

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/008Reliability or availability analysis
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L29/00Arrangements, apparatus, circuits or systems, not covered by a single one of groups H04L1/00 - H04L27/00
    • H04L29/02Communication control; Communication processing
    • H04L29/06Communication control; Communication processing characterised by a protocol
    • H04L29/0602Protocols characterised by their application
    • H04L29/06047Protocols for client-server architecture
    • H04L2029/06054Access to distributed or replicated servers, e.g. using brokers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network-specific arrangements or communication protocols supporting networked applications
    • H04L67/10Network-specific arrangements or communication protocols supporting networked applications in which an application is distributed across nodes in the network
    • H04L67/1002Network-specific arrangements or communication protocols supporting networked applications in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers, e.g. load balancing

Abstract

A method and an estimator program for estimating the optimum Server Farm size and the availability of the Server Farm for a given Redundancy Factor and a given particular number of clients.

Description

CROSS-REFERENCES TO RELATED APPLICATIONS

This application is related to U.S. Pat. No. 6,334,196, entitled “Estimator Program for Estimating the Availability of an Application Program That Runs in a Cluster of at Least Two Computers,” and to U.S. Ser. No. 09/443,926, filed Nov. 19, 1999, entitled “Method for Estimating the Availability of an Operating Server Farm,” now Allowed, which applications are incorporated herein by reference.

FIELD OF THE INVENTION

This invention relates to data processing systems of the type which include a Server Farm that executes application programs for multiple clients (users); and more particularly, this invention relates to methods for optimizing the “Server Farm size” by balancing Server Farm performance and availability requirements in the above type of data processing systems.

BACKGROUND OF THE INVENTION

The above-referenced U.S. Pat. No. 6,334,196, entitled “Estimator Program for Estimating the Availability of an Application Program That Runs in a Cluster of at Least Two Computers,” involves an estimator program that performs method steps for estimating the availability of an application program that runs on any “server” in a cluster of at least two servers. By “availability of an application program” is meant the probability that, at any particular time instant, at least one of the servers in a cluster (farm) will actually be servicing requests from external workstations able to use the application program.

In one embodiment, the so-called estimator program begins by receiving input parameters which include (i) multiple downtime periods for each computer in the cluster (farm) that occur at respective frequencies due to various downtime sources, and (ii) an application “failover” time period for switching the running of the application program from any one computer to another computer which is operable. From these input parameters, the described estimator program estimates first and second annual stoppage times, then determines the availability of the application program on the cluster of computers which is derived from the sum of the first and second annual stoppage times.

Thus, as discussed, the estimator program of the previously described invention estimated a first annual stoppage time for the application program, due solely to the concurrent stoppage of all of the computers, as a function of the ratio of the single-computer virtual downtime period to the single-computer virtual time between stops. The estimator program was then used to estimate a second annual stoppage time for the application program, due solely to the switching of the application program from one computer to another, as a function of the single virtual stoppage rate and the application failover time period. From these, the estimator program determined the availability of the application program on the cluster of computers by deriving the sum of the first and second annual stoppage times.
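For orientation, the arithmetic of that final step can be pictured with a minimal sketch (Python, illustrative only). It assumes that availability is simply the complement of the total annual stoppage time over a year of operation; the actual estimator of U.S. Pat. No. 6,334,196 derives the two stoppage times from the virtual downtime, virtual time between stops, and failover parameters described above.

    HOURS_PER_YEAR = 8760.0

    def cluster_availability(stoppage_all_down_hours, stoppage_failover_hours):
        """Application availability on the cluster, given the two estimated
        annual stoppage times (hours/year) described in the text:
        (1) time when all computers are down concurrently, and
        (2) time lost to switching (failover) of the application."""
        total_stoppage = stoppage_all_down_hours + stoppage_failover_hours
        return 1.0 - total_stoppage / HOURS_PER_YEAR

    # Example: 0.5 h/year of concurrent downtime plus 1.5 h/year of failover
    # time gives roughly 99.977% application availability.
    print(f"{cluster_availability(0.5, 1.5):.5%}")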

The estimator program method was based on the assumption that “application availability” was to be determined from four factors which were:

(i) single-server hardware reliability;

(ii) maintenance, support, and service strategies;

(iii) user application and environment;

(iv) failover or system reconnection mechanism and application recovery mechanism.

The prior estimation parameters which were described in the co-pending application U.S. Ser. No. 08/550,603 did not take into consideration the total number of operating Server Farm clients and the normal single server workload of users involved with each single server. Further, this earlier application did not provide a recommendation or estimate regarding the number of servers required in the Server Farm (or cluster) which would meet the customers' performance and redundancy level requirements, nor did it establish an optimum farm configuration.

The method of the co-pending application U.S. Ser. No. 09/443,926, filed Nov. 19, 1999, now Allowed, entitled “Method for Estimating the Availability of an Operating Server Farm,” extended the original method to Server Farms designed to serve user communities with a required particular number of customers “n”. This method, involving the Server Farm size and availability calculations, is based on (1) single server parameters such as (a) the mean time to failure (MTTF), (b) the mean time to repair (MTTR), and (c) the single server application performance benchmarks, and (2) individual customer preferential requirements, involving (a) the total number of Server Farm application users and (b) a desirable redundancy level.

This estimation method for availability uses the following definition of Server Farm availability: the probability that a Server Farm provides access to applications and data for a particular minimum number of users. As soon as the Server Farm cannot serve this particular minimum number of users, it is considered failed. When some of the users have lost connections but can reconnect to other servers and continue to work, and the majority of users do not experience any interruptions in their work, the farm is not considered failed, provided it can still serve this particular number of users.

A widely used approach to improve a system's availability beyond the availability of a single system is by using Server Farms with redundant servers. In this case, if one of the farm's servers fails, the “unlucky” users connected to this server will lose their connections, but will have an opportunity to reconnect to other servers in the farm and get access to their applications and data. If all of the “unlucky” users get access to their applications and data, the farm is considered “available.” If at least one of the “unlucky” users fails to get access to his/her applications and data, it means that the Server Farm's redundancy was exhausted and the Server Farm is considered failed.
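A crude way to picture this definition is an independent-server, k-out-of-n model: the farm is “available” while at least the minimum number of servers needed to serve the particular number of users is up. The Python sketch below only illustrates the definition under that independence assumption; the actual estimation method of the referenced application U.S. Ser. No. 09/443,926 also accounts for reconnection and failover effects, so it will not reproduce the availability figures given later in Table 1.

    from math import ceil, comb

    def single_server_availability(mttf_hours, mttr_hours):
        # Steady-state availability of one server.
        return mttf_hours / (mttf_hours + mttr_hours)

    def farm_availability(n_servers, min_servers, a):
        # Probability that at least `min_servers` of `n_servers` independent
        # servers, each with availability `a`, are up at any instant.
        return sum(comb(n_servers, k) * a**k * (1.0 - a)**(n_servers - k)
                   for k in range(min_servers, n_servers + 1))

    # Illustrative numbers: 400 users, 100 users/server maximum, 5 servers,
    # single-server MTTF = 1,400 hours and MTTR = 6 hours.
    a = single_server_availability(1400.0, 6.0)
    min_needed = ceil(400 / 100)     # servers required to serve all users
    print(f"{farm_availability(5, min_needed, a):.5%}")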

The parameters MTTF and MTTR can be estimated, as indicated in the cited prior U.S. Pat. No. 6,334,196, as the single-computer virtual time between failures and the single-computer virtual downtime period, respectively, for a particular application and user environment.

Therefore, the availability estimation method of the prior application U.S. Ser. No. 09/443,926 allows one to estimate such parameters of the Server Farm as number of servers, Server Farm availability, and Server Farm downtime, based on a set of input data. At the same time, however, this method does not provide any recommendations about optimum combinations of the Server Farm parameters that can be chosen at the Server Farm planning or design stage.

The presently described new method for Server Farm size optimization is based on input data that include single server parameters similar to those of the prior application U.S. Ser. No. 09/443,926 and at least two new parameters: the single server cost and the downtime cost. Additionally, this new method includes the newly added steps of selecting an optimization parameter, selecting an optimization criterion, and using an optimization technique to find the optimum value of the optimization parameter.

While the present invention may be shown in a preferential embodiment for a Server Farm that uses any workload balancing mechanism, it is not limited thereto, and can be used for any other data processing environment where the definition of the “Server Farm availability” can be applied.

Thus the object of the present invention is to provide a method for optimizing the “Server Farm size” by balancing Server Farm performance and availability requirements. The method generates an optimum recommendation for the selected set of input data, the selected optimization criterion, and the selected optimization parameter.

SUMMARY OF THE INVENTION

In accordance with the present invention, a novel estimator program performs method steps for the Server Farm optimization for a given particular number of clients “n” by balancing Server Farm performance and availability requirements. By the optimization of the Server Farm is herein meant the process of finding the optimum value of the selected optimization parameter that delivers the optimum value (maximum or minimum) for the selected optimization criterion and a given set of input data.

The method of optimization is based on a relationship between two major system attributes, performance and availability, that are “competing” for the same system redundant resources. The purpose of the Server Farm optimization is balancing of the business performance and availability requirements.

System performance in a Server Farm computing environment is a particular number of concurrent users with the minimum required application response time and reliable access to their applications and data. Server Farm availability is the probability that a Server Farm provides the required system performance level. A Server Farm parameter that indirectly defines both Server Farm availability and performance is the Redundancy Factor, which is a measure of the available system resources. It is the difference between maximum and nominal performance, expressed as a percentage of the maximum performance.

In one particular embodiment, the method uses a simplified Server Farm availability economic model. The model uses an optimization criterion that is the total of the initial investment in a “highly available” Server Farm and the downtime losses during the period of owning the Server Farm. The Redundancy Factor is used as the optimization parameter. Different values of the Redundancy Factor can result in different Server Farm sizes. Greater values of the Redundancy Factor mean that more system resources are used to increase Server Farm availability, and usually more redundant servers are required to provide the same required Server Farm performance.

The method uses the fact that the decrease in downtime losses does not always justify additional investments in redundant servers. The first additions of redundant servers usually deliver better Server Farm availability, or less Server Farm downtime. At some particular Redundancy Factor value and/or Server Farm size, the Server Farm availability is close to the maximum possible value. Beyond that point, the addition of redundant servers will not decrease Server Farm downtime enough to justify the cost of the additionally expanded Server Farm. This Redundancy Factor value or Server Farm size value is the optimum value that minimizes the total Server Farm owner losses, which include the initial investment plus the estimated downtime losses.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a drawing showing three different Server Farms with different performance and availability combinations;

FIG. 2 is a drawing of a data processing system which includes an application Server Farm that executes application programs for multiple clients (users). This is an example of the typical system where the optimization method of balancing Server Farm performance and availability requirements can be applied;

FIG. 3 shows an example of the optimization criterion as a function of the optimization parameter. Here, the optimization criterion is the total of the initial investment in a highly available Server Farm and the downtime losses during the period of owning the Server Farm;

FIG. 4 shows an estimator program, which performs method steps for estimating the optimum value of one of the possible optimization parameters of an operating Server Farm designed to serve a particular number of clients “n”.

GLOSSARY LIST OF RELEVANT ITEMS

1. AVAILABILITY: This is a measure of the readiness of the system and an application to deliver an expected service to the user with a required performance level. It may be described as the percentage of time that a system and an application are running, as distinguished from the system being down for maintenance or repairs.

2. MEAN TIME TO FAILURE (MTTF): This is the average operating time between two failures, that can be estimated as the total operating time divided by the number of failures.

3. MEAN TIME TO REPAIR (MTTR): This is the average “downtime” in case of failure, that can be estimated as the total downtime divided by the number of failures.

4. DOWNTIME: The downtime or repair time for a single application server is the time interval required to restore the server and system back to normal business operation. At the end of the repair period the applications running on the repaired server are available to users. The downtime for a Server Farm is the time interval required to restore the nominal Server Farm performance.

5. FAILOVER: This is a mode of operation in the system which has two or more servers or computers wherein a failure in one of the servers or computers will result in transfer of operations to the other or another one of the still operating servers and computers. Failover time is the period of time required for successful transfer from a failed server to an operative server.

6. ESTIMATOR PROGRAM: This is a program which performs method steps for estimating system parameters such as the availability of an application program to run on any computer or server in a cluster of at least two servers or computers. This type of estimator program was the subject of the co-pending application U.S. Ser. No. 08/550,603, which is incorporated herein by reference. Another estimator program is the subject of this patent application.

7. SERVER FARM: This designates a group of identical individual servers wherein each server can provide service to many single individual clients. The Server Farm can run enterprise class client/server applications (SAP, PeopleSoft, Microsoft SQL) or applications that are traditionally run on a single workstation (Microsoft Office 97). The Server Farm usually uses a work-load balancing mechanism that distributes requests for services or applications to the available servers.

8. REDUNDANCY FACTOR (Rf): This is a measure of the additional number of users that can be added to the nominal number of users per server without exceeding the maximum number of users per server (server performance benchmark maximum of users). It is a difference between maximum and nominal performance as a percentage of the maximum performance. The Redundancy Factor can be calculated as 100 percent minus a usage factor Uf.

9. SERVER FARM AVAILABILITY CALCULATOR: This is an estimator program which estimates the availability for the Server Farm.

10. THIN CLIENT SERVER FARM AVAILABILITY CALCULATOR: This is one of the examples of the SERVER FARM AVAILABILITY CALCULATOR. Because thin-client configurations are intended to make applications available to multiple users at the same time, this calculator calculates the availability of a specified number of instances of an application (not just a single instance) where each application instance is being run at the server, but all the user input response is taking place at the client terminal. In this scenario, downtime occurs whenever the number of available instances of the application drops below the required specified number of instances.

11. USAGE FACTOR (Uf): This is the ratio of the nominal number of users per server to the maximum number of users per server (server performance benchmark maximum of users) times 100 percent.
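A two-line calculation ties glossary items 8 and 11 together; the numbers shown are the Server Farm 1 values that appear later in FIG. 1 and Table 1 (Python, illustrative only).

    def usage_factor(nominal_users_per_server, max_users_per_server):
        # Uf: nominal workload as a percentage of the benchmark maximum.
        return 100.0 * nominal_users_per_server / max_users_per_server

    def redundancy_factor(nominal_users_per_server, max_users_per_server):
        # Rf = 100% - Uf: headroom left for the users of failed servers.
        return 100.0 - usage_factor(nominal_users_per_server, max_users_per_server)

    print(redundancy_factor(80, 100))   # Server Farm 1 of FIG. 1 -> 20.0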

12. OPTIMIZATION CRITERION: This is a function that determines the value of one of the essential system attributes and must be minimized (or maximized) by variation of one or more system parameters that are chosen as OPTIMIZATION PARAMETERS. Each optimization parameter should have a predefined domain that defines the values that the optimization parameter may assume. The OPTIMIZATION CRITERION is a focus of an optimum system design or configuration. The examples of the optimization criteria are system performance, system availability, and cost of ownership.

DESCRIPTION OF PREFERRED EMBODIMENT

FIG. 1 shows analytical data graphs for three different Server Farms with different performance and availability combinations. All of the illustrated Server Farms are built from the same type of single server, with a maximum of 100 concurrent users per server, a Mean Time To Failure of 1,400 hours, and a Mean Time To Repair of 6 hours. Server Farm 1 contains 5 servers and, with a Redundancy Factor of 20%, can support 400 concurrent users even in the case of one server failure. Server Farm 2 and Server Farm 3 demonstrate two different possibilities when one additional server is added to Server Farm 1.

Server Farm 2 uses all additional resources to improve Server Farm availability. All new available resources are reserved solely for the users of failed servers. The nominal Server Farm performance of Farm 2 is unchanged (400 concurrent users), but the Redundancy Factor is increased from 20% to 33.3%. This results in a Server Farm availability increase from 99.965% to 99.999% and, correspondingly, in a Server Farm downtime decrease of about 3 hours (181 minutes) per year; that is, Server Farm 2 has 5 minutes/year of downtime, while Farm 1 had 186 minutes/year.

Server Farm 3 uses all additional resources to improve Server Farm performance. All new available resources are dedicated to new users. The nominal Server Farm performance is changed from the 400 concurrent users of Farms 1 and 2 to 500 users in Farm 3. Therefore, the Redundancy Factor is decreased from 20% (Farm 1) to 16.7% (Farm 3). This results in a slight Server Farm availability decrease from 99.965% (Farm 1) to 99.947% (Farm 3) and, correspondingly, in a Server Farm downtime increase of about 1.5 hours (91 minutes) per year, i.e., 186 minutes/year of downtime in Farm 1 versus 277 minutes/year in Farm 3.

It can be noted that the Redundancy Factor is a key parameter in a Server Farm since it weighs heavily into Server Farm performance and Server Farm size. And as will be indicated later, the Redundancy Factor can be one of the optimization parameters for the optimum Server Farm configuration which minimizes the total investment and losses for the Server Farm that properly handles the required number of customers to be served.

In terms of Server Farm configuration cost, Table 1 demonstrates how the configuration differences affect the ownership cost of farm usage.

For Server Farm 1, assuming a cost of $400 per user per server and a downtime cost of $25,000 per hour, the calculation of the ownership cost over 5 years is as follows:

5 SERVERS × $400 PER USER/SERVER × 100 USERS/SERVER = $200,000
3.1 HOURS DOWNTIME × $25,000/HOUR = $77,461/YEAR

TOTAL LOSSES+INVESTMENT FOR 5 YEARS FOR SERVER FARM 1 IS $587,306

Now, regarding Server Farm 2:

6 SERVERS × $400 PER USER/SERVER × 100 USERS/SERVER = $240,000
0.08 HOURS DOWNTIME × $25,000/HOUR = $1,981/YEAR

TOTAL LOSSES+INVESTMENT FOR 5 YEARS FOR SERVER FARM 2 IS $249,905

TABLE 1
Server Farm Cost Analysis

                                                Server Farm 1   Server Farm 2   Server Farm 3
Required number of concurrent users                       400             400             500
Server MTTR, hours                                           6               6               6
Server MTTF, hours                                       1,400           1,400           1,400
Maximum number of concurrent users per server              100             100             100
Redundancy factor                                        20.0%           33.3%           16.7%
Normal workload, number of users                            80              66              83
Estimated number of servers                                  5               6               6
Estimated peak number of users                             500             600             600
Estimated number of redundant servers                        1               2               1
Estimated Server Farm availability, %                99.96463%       99.99910%       99.94738%
Estimated Server Farm downtime, hours/year                3.10            0.08            4.61
Cost per user per server, $/user per server               $400            $400            $400
Farm cost, $                                          $200,000        $240,000        $240,000
Downtime cost per hour, $/hour                         $25,000         $25,000         $25,000
Downtime cost per year, $/year                         $77,461          $1,981        $115,241
System life period, years                                    5               5               5
Downtime cost for system life period, $               $387,306          $9,905        $576,203
Total losses + investment, $                          $587,306        $249,905        $816,203

For Server Farm 3 of FIG. 1 which must handle 500 users, the five-year ownership cost with 6 servers is as follows:

6 SERVERS × $400 PER USER/SERVER × 100 USERS/SERVER = $240,000
4.61 HOURS DOWNTIME × $25,000/HOUR = $115,241/YEAR

TOTAL LOSSES+INVESTMENT FOR 5 YEARS FOR SERVER FARM 3 IS $816,203

Thus, based on the criterion “TOTAL LOSSES+INVESTMENT FOR 5 YEARS”, Server Farm 2 justifies the investment in the additional server that was added to Server Farm 1.
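The ownership-cost arithmetic behind Table 1 can be checked with a short script (Python, illustrative only). The per-year downtime cost figures are taken directly from Table 1; the worked examples above quote rounded downtime hours, so recomputing from 3.1 or 4.61 hours would differ by a few dollars.

    def ownership_cost(servers, cost_per_user_per_server, max_users_per_server,
                       downtime_cost_per_year, life_years=5):
        # "TOTAL LOSSES+INVESTMENT": initial farm investment plus downtime
        # losses accumulated over the ownership period.
        farm_cost = servers * cost_per_user_per_server * max_users_per_server
        downtime_losses = downtime_cost_per_year * life_years
        return farm_cost + downtime_losses

    # Totals agree with the "Total losses + investment" row of Table 1 to
    # within a dollar or two of rounding in the per-year figures.
    print(ownership_cost(5, 400, 100, 77461))    # Server Farm 1, Table 1: $587,306
    print(ownership_cost(6, 400, 100, 1981))     # Server Farm 2, Table 1: $249,905
    print(ownership_cost(6, 400, 100, 115241))   # Server Farm 3, Table 1: $816,203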

FIG. 2 is a generalized diagram that shows a type of environment, such as Server Farms 1, 2, and 3 in FIG. 1, to which the present invention relates. Shown in FIG. 2 is an application Server Farm 60, a database server 40, and a set of client terminals 81, 82, . . . , CN, having respective I/O modules 71, 72, . . . , N3. The database server 40 is connected to a group of farm servers designated as 10, 20, . . . , N. Each of the servers is able to run application programs designated as 10 p, 20 p, . . . , Np. Network 70 is coupled to Input/Output (I/O) units 12, 22, . . . , N2, on the farm servers 10, 20, . . . , N and to I/O units 71, 72, . . . , N3 on client terminals 81, 82, . . . , CN. Users (clients) can use client terminals 81, 82, . . . , CN to access the application programs in the farm servers via the network 70.

FIG. 3 illustrates an example of the optimization criterion “TOTAL LOSSES+INVESTMENT FOR 5 YEARS”, T, with the optimization parameter “Redundancy Factor”. This criterion is a function of the Single Server Farm cost, the downtime cost, and the Redundancy Factor. The Single Server Farm cost, C, is calculated as the product of the cost per user per server and the maximum number of concurrent users per server. The downtime cost for five years, D, is calculated as the product of the downtime cost per hour, the downtime per year (in hours), and five years. The number of servers in the farm, N, is calculated using the method described in the co-pending patent application U.S. Ser. No. 09/443,926, based on single-server parameters, customer performance requirements, and the given value of the Redundancy Factor. In this example, the optimization criterion “TOTAL LOSSES+INVESTMENT FOR 5 YEARS” is:

T=C*N+D

The value of the optimization criterion for a Redundancy Factor equal to 0 (no redundant servers) is greater than $23,000,000 and is not shown in FIG. 3. At Redundancy Factor values of 5-15% (one redundant server) the optimization criterion is significantly less, equaling about $816,000. The investment in another redundant server (Redundancy Factor 20-25%) is nevertheless justified, as the downtime losses are significantly reduced. Beyond that point, an increase in the Redundancy Factor results in a negligible reduction of downtime that does not justify investment in additional redundant servers. Therefore, the Redundancy Factor values of 20-25%, which correspond to two redundant servers, are the optimum values of the Redundancy Factor that minimize the value of the optimization criterion “TOTAL LOSSES+INVESTMENT FOR 5 YEARS”.
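One plausible reading of how N and T depend on the Redundancy Factor is sketched below (Python, illustrative only). It assumes that the nominal per-server workload is the benchmark maximum reduced by the Redundancy Factor and that N is the smallest farm size covering all n users; the downtime per year must come from the availability method of U.S. Ser. No. 09/443,926, which is not reproduced here, so it is passed in as a precomputed value.

    from math import ceil

    def servers_for(rf_percent, n_users, max_users_per_server):
        # Assumed reading: nominal workload = maximum reduced by the
        # Redundancy Factor; N = smallest farm that covers all n users.
        nominal = max_users_per_server * (1.0 - rf_percent / 100.0)
        return ceil(n_users / nominal)

    def criterion_T(rf_percent, n_users, max_users_per_server,
                    single_server_farm_cost, downtime_cost_per_hour,
                    downtime_hours_per_year, life_years=5):
        # T = C * N + D, with C the Single Server Farm cost, N the farm size,
        # and D the downtime losses over the ownership period.
        N = servers_for(rf_percent, n_users, max_users_per_server)
        D = downtime_cost_per_hour * downtime_hours_per_year * life_years
        return single_server_farm_cost * N + D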

Now, in accordance with the present invention, steps are provided for optimization of an operating Server Farm designed to serve a particular number of clients “n”. These steps will be described in conjunction with FIG. 4 that shows an estimator program, which performs method steps for estimating the optimum value of one of the possible optimization parameters of an operating Server Farm.

In step A of FIG. 4, requests for the following input parameters are displayed: (1) required number of clients “n” for utilizing said Server Farm, (2) the single server farm cost, (3) the downtime cost per hour, (4) the maximum single-server workload of users, (5) the Mean Time To Repair for a single server, and (6) Mean Time To Failure for a single server.

In step B of FIG. 4, the values of the requested input parameters are entered via the keyboard and displayed on the computer monitor.

In step C of FIG. 4, an optimization parameter and its domain are selected. For example, in FIG. 3, the selected optimization parameter is the Redundancy Factor, whose domain is the interval between 0 and 100 percent. Other possible optimization parameters are the Server Farm size, which is any natural (integer) number of servers, and the normal single-server workload of users, which is any number between one and the maximum single-server workload.

In step D of FIG. 4, an optimization criterion is selected. In FIG. 3, for example, the selected optimization criterion is “TOTAL LOSSES+INVESTMENT FOR 5 YEARS”:

T=C*N+D.

As mentioned above, the number of servers in the farm, N, is calculated using the method described in the co-pending patent application U.S. Ser. No. 09/443,926 based on single-server parameters, customer performance requirements, and the given value of the Redundancy Factor.

In step E of FIG. 4, the optimization criterion selected in step D is optimized by one of the known optimization techniques described in the books Practical Optimization by Philip Gill, Academic Press, 1981, and/or Engineering Optimization: Methods and Applications by G. V. Reklaitis and others, John Wiley & Sons, 1983. FIG. 3 illustrates one of the simplest optimization techniques: plotting the graph of the criterion for a set of values of the optimization parameter. In particular, the Redundancy Factor values used run from 0 to 100 percent in steps of 5%: 0, 5, 10, . . . , 100%. If the accuracy of the calculation is not sufficient, the step can be reduced from 5% to 2.5%, and so on.
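The grid-search reading of this simplest technique can be sketched as follows (Python, illustrative only), reusing the hypothetical servers_for and criterion_T helpers from the sketch after FIG. 3's description. The estimate_downtime_hours callback is also hypothetical: it stands in for the downtime estimate produced by the method of U.S. Ser. No. 09/443,926 for a given configuration.

    def optimize_redundancy_factor(n_users, max_users_per_server,
                                   single_server_farm_cost, downtime_cost_per_hour,
                                   estimate_downtime_hours,
                                   step_percent=5.0, life_years=5):
        # Evaluate the criterion on a grid of Redundancy Factor values, as in
        # FIG. 3, and keep the value that minimizes it.  Rf = 100% is skipped
        # because it would leave no usable per-server capacity.
        best_rf, best_T = None, float("inf")
        rf = 0.0
        while rf < 100.0:
            n_servers = servers_for(rf, n_users, max_users_per_server)
            downtime = estimate_downtime_hours(rf, n_servers)
            T = criterion_T(rf, n_users, max_users_per_server,
                            single_server_farm_cost, downtime_cost_per_hour,
                            downtime, life_years)
            if T < best_T:
                best_rf, best_T = rf, T
            rf += step_percent
        # If more accuracy is needed, halve step_percent and repeat.
        return best_rf, best_T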

In step F of FIG. 4, the optimum value(s) of the optimization parameter is displayed. For example, in FIG. 3, the optimum values of the Redundancy Factor are 20 and 25 percent, which correspond to two redundant servers. The value of the optimization criterion at a Redundancy Factor of 20 or 25 percent is about $297,000.

Described herein has been an optimization method and an estimator program for a Server Farm designed to serve a particular number of clients.

While a preferred implementation of the invention has been described, it should be understood that other implementations may be used which are still encompassed by the attached claims.

Claims (18)

What is claimed is:
1. An estimator program that performs method steps for estimating the optimum operating Server Farm designed to serve a particular number of clients “n” comprising the steps of:
(a) inputting a group of parameters involving at least one parameter for Single Server Farm cost evaluation and at least one parameter for downtime cost evaluation; wherein said step (a) of inputting said group of parameters includes the steps of:
(a1) selecting for input said particular number of clients “n” for utilizing said Server Farm;
(a2) selecting for input one parameter for said single Server Farm cost evaluation;
(a3) selecting for input one parameter for said downtime cost evaluation;
(b) selecting at least one Server Farm optimization parameter and its domain which indicates the values that the Server Farm optimization parameter may assume;
(c) selecting a Server Farm optimization criterion that is a function of at least three arguments: (i) said Single Server Farm cost evaluation; (ii) said downtime cost evaluation; (iii) said Server Farm optimization parameter;
(d) using an optimization technique to find the optimum value of the optimization parameter.
2. The method of claim 1 wherein step (a2) involves the Single Server Farm (SSF) cost.
3. The method of claim 1 wherein step (a3) involves the client-license cost per client per server.
4. The method of claim 1 wherein step (a3) involves the downtime cost per client, per hour.
5. The method of claim 1 wherein step (a3) involves the downtime cost per server.
6. The method of claim 1 wherein step (a) further includes the steps of:
(2a4) selecting for input a maximum single server workload of users;
(2a5) selecting for input a Mean Time To Repair (MTTR) for a single server;
(2a6) selecting for input a Mean Time To Failure (MTTF) for a single server.
7. The method of claim 1 wherein step (b) for selecting an optimization parameter includes:
(b1) selecting a Redundancy Factor having a domain which is an interval between 0 and 100 percent.
8. The method of claim 1 wherein step (b) for selecting an optimization parameter includes:
(b2) selecting a Server Farm size which is any natural integer number of servers.
9. The method of claim 1 wherein step (b) for selecting an optimization parameter includes:
(b3) selecting a normal single server workload of users having a domain which is any number between one (1) and the maximum single server workload.
10. The method of claim 1 wherein step (c) for selecting said optimization criteria of three arguments includes:
(c1) selecting an optimization function which is the sum of the entire Server Farm cost and the downtime losses calculated as based on said Single Server Farm (SSF) cost evaluation and said downtime cost evaluation.
11. The method of claim 1 wherein step (c) for selecting said optimization criteria includes:
(c2) selecting an optimization function which is a linear or concave function (up everywhere on the function's domain or down everywhere on the function's domain) of said Server Farm cost evaluation and said downtime losses evaluation.
12. The method of claim 1 wherein step (d) for using said optimization procedure includes the steps of:
(d1) selecting a value of said optimization parameter from said domain;
(d2) calculating said Single Server Farm cost;
(d3) calculating said downtime cost;
(d4) calculating a value of said optimization criterion;
(d5) making an evaluation decision about the end or the continuation of said optimization procedure.
13. The method of claim 12 wherein step (d2) involves the single server cost.
14. The method of claim 12 wherein step (d2) involves the client-license cost per client, per server.
15. The method of claim 12 wherein step (d3) involves the downtime cost per server, per hour.
16. The method of claim 12 wherein step (d3) involves the downtime cost per client, per hour.
17. The method of claim 12 wherein step (d5) involves the decision to stop the procedure if the optimum number of servers in the configured farm is determined.
18. The method of claim 12 wherein step (d5) includes the step of:
(d5a) continuing the optimization procedure if the optimum number for Server Farm size is not yet determined, by repeating said steps (d2) through (d5) with another value of said optimization parameter from said domain.
US09474706 1999-12-29 1999-12-29 Method for server farm configuration optimization Active US6571283B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09474706 US6571283B1 (en) 1999-12-29 1999-12-29 Method for server farm configuration optimization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US09474706 US6571283B1 (en) 1999-12-29 1999-12-29 Method for server farm configuration optimization

Publications (1)

Publication Number Publication Date
US6571283B1 true US6571283B1 (en) 2003-05-27

Family

ID=23884632

Family Applications (1)

Application Number Title Priority Date Filing Date
US09474706 Active US6571283B1 (en) 1999-12-29 1999-12-29 Method for server farm configuration optimization

Country Status (1)

Country Link
US (1) US6571283B1 (en)

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010054095A1 (en) * 2000-05-02 2001-12-20 Sun Microsystems, Inc. Method and system for managing high-availability-aware components in a networked computer system
US20030051021A1 (en) * 2001-09-05 2003-03-13 Hirschfeld Robert A. Virtualized logical server cloud
US20030084157A1 (en) * 2001-10-26 2003-05-01 Hewlett Packard Company Tailorable optimization using model descriptions of services and servers in a computing environment
US20030084156A1 (en) * 2001-10-26 2003-05-01 Hewlett-Packard Company Method and framework for generating an optimized deployment of software applications in a distributed computing environment using layered model descriptions of services and servers
US20030187967A1 (en) * 2002-03-28 2003-10-02 Compaq Information Method and apparatus to estimate downtime and cost of downtime in an information technology infrastructure
US20030236822A1 (en) * 2002-06-10 2003-12-25 Sven Graupner Generating automated mappings of service demands to server capacites in a distributed computer system
US6829491B1 (en) * 2001-08-15 2004-12-07 Kathrein-Werke Kg Dynamic and self-optimizing smart network
US6859929B1 (en) * 2000-11-02 2005-02-22 Unisys Corporation Method for server metafarm configuration optimization
US20050081058A1 (en) * 2003-10-09 2005-04-14 International Business Machines Corporation VLAN router with firewall supporting multiple security layers
US20050102398A1 (en) * 2003-11-12 2005-05-12 Alex Zhang System and method for allocating server resources
US20050251802A1 (en) * 2004-05-08 2005-11-10 Bozek James J Dynamic migration of virtual machine computer programs upon satisfaction of conditions
US20060047542A1 (en) * 2004-08-27 2006-03-02 Aschoff John G Apparatus and method to optimize revenue realized under multiple service level agreements
US7035919B1 (en) * 2001-03-21 2006-04-25 Unisys Corporation Method for calculating user weights for thin client sizing tool
US7039705B2 (en) * 2001-10-26 2006-05-02 Hewlett-Packard Development Company, L.P. Representing capacities and demands in a layered computing environment using normalized values
US7047177B1 (en) * 2001-03-21 2006-05-16 Unisys Corporation Thin client sizing tool for enterprise server farm solution configurator
US7050961B1 (en) * 2001-03-21 2006-05-23 Unisys Corporation Solution generation method for thin client sizing tool
US7062426B1 (en) * 2001-03-21 2006-06-13 Unisys Corporation Method for calculating memory requirements for thin client sizing tool
US20070297344A1 (en) * 2006-06-21 2007-12-27 Lockheed Martin Corporation System for determining an optimal arrangement of servers in a mobile network
US20080250267A1 (en) * 2007-04-04 2008-10-09 Brown David E Method and system for coordinated multiple cluster failover
US20100281181A1 (en) * 2003-09-26 2010-11-04 Surgient, Inc. Network abstraction and isolation layer for masquerading machine identity of a computer
US7886055B1 (en) 2005-04-28 2011-02-08 Hewlett-Packard Development Company, L.P. Allocating resources in a system having multiple tiers
US8078728B1 (en) 2006-03-31 2011-12-13 Quest Software, Inc. Capacity pooling for application reservation and delivery
US8194674B1 (en) 2007-12-20 2012-06-05 Quest Software, Inc. System and method for aggregating communications and for translating between overlapping internal network addresses and unique external network addresses
US8296267B2 (en) 2010-10-20 2012-10-23 Microsoft Corporation Upgrade of highly available farm server groups
US8386501B2 (en) 2010-10-20 2013-02-26 Microsoft Corporation Dynamically splitting multi-tenant databases
US8417737B2 (en) 2010-10-20 2013-04-09 Microsoft Corporation Online database availability during upgrade
EP1484684B1 (en) * 2003-06-06 2013-08-07 Sap Ag Method and computer system for providing a cost estimate for sizing a computer system
US8751656B2 (en) 2010-10-20 2014-06-10 Microsoft Corporation Machine manager for deploying and managing machines
US8775125B1 (en) * 2009-09-10 2014-07-08 Jpmorgan Chase Bank, N.A. System and method for improved processing performance
US8799453B2 (en) 2010-10-20 2014-08-05 Microsoft Corporation Managing networks and machines for an online service
US8850550B2 (en) 2010-11-23 2014-09-30 Microsoft Corporation Using cached security tokens in an online service
US8918782B2 (en) 2011-05-27 2014-12-23 Microsoft Corporation Software image distribution
US9075661B2 (en) 2010-10-20 2015-07-07 Microsoft Technology Licensing, Llc Placing objects on hosts using hard and soft constraints
US20160092322A1 (en) * 2014-09-30 2016-03-31 Microsoft Corporation Semi-automatic failover
US9721030B2 (en) 2010-12-09 2017-08-01 Microsoft Technology Licensing, Llc Codeless sharing of spreadsheet objects
US9959148B2 (en) 2015-02-11 2018-05-01 Wipro Limited Method and device for estimating optimal resources for server virtualization

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6496948B1 (en) * 1999-11-19 2002-12-17 Unisys Corporation Method for estimating the availability of an operating server farm

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6496948B1 (en) * 1999-11-19 2002-12-17 Unisys Corporation Method for estimating the availability of an operating server farm

Cited By (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010054095A1 (en) * 2000-05-02 2001-12-20 Sun Microsystems, Inc. Method and system for managing high-availability-aware components in a networked computer system
US7143167B2 (en) * 2000-05-02 2006-11-28 Sun Microsystems, Inc. Method and system for managing high-availability-aware components in a networked computer system
US6859929B1 (en) * 2000-11-02 2005-02-22 Unisys Corporation Method for server metafarm configuration optimization
US7047177B1 (en) * 2001-03-21 2006-05-16 Unisys Corporation Thin client sizing tool for enterprise server farm solution configurator
US7035919B1 (en) * 2001-03-21 2006-04-25 Unisys Corporation Method for calculating user weights for thin client sizing tool
US7062426B1 (en) * 2001-03-21 2006-06-13 Unisys Corporation Method for calculating memory requirements for thin client sizing tool
US7050961B1 (en) * 2001-03-21 2006-05-23 Unisys Corporation Solution generation method for thin client sizing tool
US6829491B1 (en) * 2001-08-15 2004-12-07 Kathrein-Werke Kg Dynamic and self-optimizing smart network
US6880002B2 (en) * 2001-09-05 2005-04-12 Surgient, Inc. Virtualized logical server cloud providing non-deterministic allocation of logical attributes of logical servers to physical resources
US20030051021A1 (en) * 2001-09-05 2003-03-13 Hirschfeld Robert A. Virtualized logical server cloud
US7039705B2 (en) * 2001-10-26 2006-05-02 Hewlett-Packard Development Company, L.P. Representing capacities and demands in a layered computing environment using normalized values
US7054934B2 (en) * 2001-10-26 2006-05-30 Hewlett-Packard Development Company, L.P. Tailorable optimization using model descriptions of services and servers in a computing environment
US20030084156A1 (en) * 2001-10-26 2003-05-01 Hewlett-Packard Company Method and framework for generating an optimized deployment of software applications in a distributed computing environment using layered model descriptions of services and servers
US20030084157A1 (en) * 2001-10-26 2003-05-01 Hewlett Packard Company Tailorable optimization using model descriptions of services and servers in a computing environment
US7035930B2 (en) * 2001-10-26 2006-04-25 Hewlett-Packard Development Company, L.P. Method and framework for generating an optimized deployment of software applications in a distributed computing environment using layered model descriptions of services and servers
US20030187967A1 (en) * 2002-03-28 2003-10-02 Compaq Information Method and apparatus to estimate downtime and cost of downtime in an information technology infrastructure
US20030236822A1 (en) * 2002-06-10 2003-12-25 Sven Graupner Generating automated mappings of service demands to server capacites in a distributed computer system
US7072960B2 (en) * 2002-06-10 2006-07-04 Hewlett-Packard Development Company, L.P. Generating automated mappings of service demands to server capacities in a distributed computer system
EP1484684B1 (en) * 2003-06-06 2013-08-07 Sap Ag Method and computer system for providing a cost estimate for sizing a computer system
US20100281181A1 (en) * 2003-09-26 2010-11-04 Surgient, Inc. Network abstraction and isolation layer for masquerading machine identity of a computer
US8331391B2 (en) 2003-09-26 2012-12-11 Quest Software, Inc. Network abstraction and isolation layer for masquerading machine identity of a computer
US7451483B2 (en) 2003-10-09 2008-11-11 International Business Machines Corporation VLAN router with firewall supporting multiple security layers
US20090031413A1 (en) * 2003-10-09 2009-01-29 International Business Machines Corporation VLAN Router with Firewall Supporting Multiple Security Layers
US20050081058A1 (en) * 2003-10-09 2005-04-14 International Business Machines Corporation VLAN router with firewall supporting multiple security layers
US7581008B2 (en) * 2003-11-12 2009-08-25 Hewlett-Packard Development Company, L.P. System and method for allocating server resources
US20050102398A1 (en) * 2003-11-12 2005-05-12 Alex Zhang System and method for allocating server resources
US8566825B2 (en) 2004-05-08 2013-10-22 International Business Machines Corporation Dynamic migration of virtual machine computer programs upon satisfaction of conditions
US20050251802A1 (en) * 2004-05-08 2005-11-10 Bozek James J Dynamic migration of virtual machine computer programs upon satisfaction of conditions
US8156490B2 (en) 2004-05-08 2012-04-10 International Business Machines Corporation Dynamic migration of virtual machine computer programs upon satisfaction of conditions
US20060047542A1 (en) * 2004-08-27 2006-03-02 Aschoff John G Apparatus and method to optimize revenue realized under multiple service level agreements
US9172618B2 (en) 2004-08-27 2015-10-27 International Business Machines Corporation Data storage system to optimize revenue realized under multiple service level agreements
US8631105B2 (en) 2004-08-27 2014-01-14 International Business Machines Corporation Apparatus and method to optimize revenue realized under multiple service level agreements
US7886055B1 (en) 2005-04-28 2011-02-08 Hewlett-Packard Development Company, L.P. Allocating resources in a system having multiple tiers
US8078728B1 (en) 2006-03-31 2011-12-13 Quest Software, Inc. Capacity pooling for application reservation and delivery
US20070297344A1 (en) * 2006-06-21 2007-12-27 Lockheed Martin Corporation System for determining an optimal arrangement of servers in a mobile network
US20100241896A1 (en) * 2007-04-04 2010-09-23 Brown David E Method and System for Coordinated Multiple Cluster Failover
US20080250267A1 (en) * 2007-04-04 2008-10-09 Brown David E Method and system for coordinated multiple cluster failover
US8429450B2 (en) 2007-04-04 2013-04-23 Vision Solutions, Inc. Method and system for coordinated multiple cluster failover
US7757116B2 (en) 2007-04-04 2010-07-13 Vision Solutions, Inc. Method and system for coordinated multiple cluster failover
US8194674B1 (en) 2007-12-20 2012-06-05 Quest Software, Inc. System and method for aggregating communications and for translating between overlapping internal network addresses and unique external network addresses
US8775125B1 (en) * 2009-09-10 2014-07-08 Jpmorgan Chase Bank, N.A. System and method for improved processing performance
US9015177B2 (en) 2010-10-20 2015-04-21 Microsoft Technology Licensing, Llc Dynamically splitting multi-tenant databases
US8386501B2 (en) 2010-10-20 2013-02-26 Microsoft Corporation Dynamically splitting multi-tenant databases
US8296267B2 (en) 2010-10-20 2012-10-23 Microsoft Corporation Upgrade of highly available farm server groups
US8799453B2 (en) 2010-10-20 2014-08-05 Microsoft Corporation Managing networks and machines for an online service
US9075661B2 (en) 2010-10-20 2015-07-07 Microsoft Technology Licensing, Llc Placing objects on hosts using hard and soft constraints
US9043370B2 (en) 2010-10-20 2015-05-26 Microsoft Technology Licensing, Llc Online database availability during upgrade
US8417737B2 (en) 2010-10-20 2013-04-09 Microsoft Corporation Online database availability during upgrade
US8751656B2 (en) 2010-10-20 2014-06-10 Microsoft Corporation Machine manager for deploying and managing machines
US8850550B2 (en) 2010-11-23 2014-09-30 Microsoft Corporation Using cached security tokens in an online service
US9721030B2 (en) 2010-12-09 2017-08-01 Microsoft Technology Licensing, Llc Codeless sharing of spreadsheet objects
US8918782B2 (en) 2011-05-27 2014-12-23 Microsoft Corporation Software image distribution
US20160092322A1 (en) * 2014-09-30 2016-03-31 Microsoft Corporation Semi-automatic failover
US9836363B2 (en) * 2014-09-30 2017-12-05 Microsoft Technology Licensing, Llc Semi-automatic failover
US9959148B2 (en) 2015-02-11 2018-05-01 Wipro Limited Method and device for estimating optimal resources for server virtualization

Similar Documents

Publication Publication Date Title
US6898564B1 (en) Load simulation tool for server resource capacity planning
US6035307A (en) Enterprise data movement system and method including opportunistic performance of utilities and data move operations for improved efficiency
US20030046615A1 (en) System and method for adaptive reliability balancing in distributed programming networks
US20040226013A1 (en) Managing tasks in a data processing environment
US20020143997A1 (en) Method and system for direct server synchronization with a computing device
US20020040639A1 (en) Analytical database system that models data to speed up and simplify data analysis
US5978577A (en) Method and apparatus for transaction processing in a distributed database system
US6016501A (en) Enterprise data movement system and method which performs data load and changed data propagation operations
US20060155912A1 (en) Server cluster having a virtual server
US20040205414A1 (en) Fault-tolerance framework for an extendable computer architecture
US6029178A (en) Enterprise data movement system and method which maintains and compares edition levels for consistency of replicated data
US6401111B1 (en) Interaction monitor and interaction history for service applications
US20040243699A1 (en) Policy based management of storage resources
US20090182780A1 (en) Method and apparatus for data integration and management
US5771343A (en) System and method for failure detection and recovery
US6823356B1 (en) Method, system and program products for serializing replicated transactions of a distributed computing environment
US20080263227A1 (en) Background synchronization
US5475813A (en) Routing transactions in the presence of failing servers
US20060010130A1 (en) Method and apparatus for synchronizing client transactions executed by an autonomous client
US6898600B2 (en) Method, system, and program for managing database operations
US20030014507A1 (en) Method and system for providing performance analysis for clusters
US20090265458A1 (en) Dynamic server flow control in a hybrid peer-to-peer network
US6654771B1 (en) Method and system for network data replication
US20100005097A1 (en) Capturing and restoring database session state
US6944788B2 (en) System and method for enabling failover for an application server cluster

Legal Events

Date Code Title Description
AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SMORODINSKY, LEV;REEL/FRAME:010502/0990

Effective date: 19991228

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:023312/0044

Effective date: 20090601

Owner name: UNISYS HOLDING CORPORATION, DELAWARE

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:023312/0044

Effective date: 20090601

Owner name: UNISYS CORPORATION,PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:023312/0044

Effective date: 20090601

Owner name: UNISYS HOLDING CORPORATION,DELAWARE

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:023312/0044

Effective date: 20090601

AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:023263/0631

Effective date: 20090601

Owner name: UNISYS HOLDING CORPORATION, DELAWARE

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:023263/0631

Effective date: 20090601

Owner name: UNISYS CORPORATION,PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:023263/0631

Effective date: 20090601

Owner name: UNISYS HOLDING CORPORATION,DELAWARE

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CITIBANK, N.A.;REEL/FRAME:023263/0631

Effective date: 20090601

AS Assignment

Owner name: DEUTSCHE BANK TRUST COMPANY AMERICAS, AS COLLATERA

Free format text: PATENT SECURITY AGREEMENT (PRIORITY LIEN);ASSIGNOR:UNISYS CORPORATION;REEL/FRAME:023355/0001

Effective date: 20090731

AS Assignment

Owner name: DEUTSCHE BANK TRUST COMPANY AMERICAS, AS COLLATERA

Free format text: PATENT SECURITY AGREEMENT (JUNIOR LIEN);ASSIGNOR:UNISYS CORPORATION;REEL/FRAME:023364/0098

Effective date: 20090731

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: GENERAL ELECTRIC CAPITAL CORPORATION, AS AGENT, IL

Free format text: SECURITY AGREEMENT;ASSIGNOR:UNISYS CORPORATION;REEL/FRAME:026509/0001

Effective date: 20110623

AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:DEUTSCHE BANK TRUST COMPANY;REEL/FRAME:030004/0619

Effective date: 20121127

AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:DEUTSCHE BANK TRUST COMPANY AMERICAS, AS COLLATERAL TRUSTEE;REEL/FRAME:030082/0545

Effective date: 20121127

FPAY Fee payment

Year of fee payment: 12

AS Assignment

Owner name: WELLS FARGO BANK, NATIONAL ASSOCIATION, AS COLLATE

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:UNISYS CORPORATION;REEL/FRAME:042354/0001

Effective date: 20170417

AS Assignment

Owner name: JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT

Free format text: SECURITY INTEREST;ASSIGNOR:UNISYS CORPORATION;REEL/FRAME:044144/0081

Effective date: 20171005

AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION (SUCCESSOR TO GENERAL ELECTRIC CAPITAL CORPORATION);REEL/FRAME:044416/0358

Effective date: 20171005