WO2004084085A1 - Load distributing system by intersite cooperation - Google Patents


Info

Publication number
WO2004084085A1
WO2004084085A1 (PCT/JP2003/003273)
Authority
WO
WIPO (PCT)
Prior art keywords
server
step
service
load
characterized
Prior art date
Application number
PCT/JP2003/003273
Other languages
French (fr)
Japanese (ja)
Inventor
Tsutomu Kawai
Satoshi Tutiya
Yasuhiro Kokusho
Original Assignee
Fujitsu Limited
Priority date
Filing date
Publication date
Application filed by Fujitsu Limited
Priority to PCT/JP2003/003273 priority Critical patent/WO2004084085A1/en
Publication of WO2004084085A1 publication Critical patent/WO2004084085A1/en
Priority claimed from US11/050,058 external-priority patent/US20050144280A1/en


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 — Network-specific arrangements or communication protocols supporting networked applications
    • H04L 67/10 — Network-specific arrangements or communication protocols supporting networked applications in which an application is distributed across nodes in the network
    • H04L 67/1002 — Network-specific arrangements or communication protocols supporting networked applications in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers, e.g. load balancing

Abstract

A system comprises a front-stage center (12-1) that directly receives a request from a client (10) through a network (11) and a back-stage center (12-2) that receives the request from the client (10) through the front-stage center (12-1). Each center has auxiliary servers (17-1, 17-2). The front-stage center (12-1) provides a service using its normal servers. On detecting that the load on a server has increased, a system controller (16-1) provisions a server for the overloaded service from the auxiliary servers (17-1), which are shared between service 1 and service 2. If the load still cannot be supported after this provisioning, the system controller (16-1) instructs the system controller (16-2) of the back-stage center (12-2) to support the provision of the service. When the back-stage center (12-2) cannot support the load using its normal servers, it supports the load using its auxiliary servers (17-2).

Description

Load distribution system based on inter-site cooperation

The present invention relates to a load balancing system based on inter-site cooperation.

BACKGROUND

With the explosive growth of the Internet, the servers, networks, and other resources required on the service-provider side have become enormous. However, the number of users and the volume of their requests vary significantly with time and circumstances. If resources are secured to match peak demand, wasted resources must be maintained during normal operation; if they are not, demand peaks cannot be handled, users experience discomfort, and service quality declines. Moreover, since it is difficult to estimate the upper limit of the resources required as the number of users grows, a system that allocates resources on demand becomes necessary. At the same time, managing excessive resources raises costs, so a mechanism for effectively utilizing resources that are not currently needed is also required.

Figure 1 is an example of a conventional load distribution system.

In the configuration of FIG. 1, a client 10 accesses a data center 12 via a network 11 and receives a service. A load balancer 13 is connected to a plurality of servers 14.

When a single server cannot handle the demand, multiple servers are installed as shown in FIG. 1 and a load balancer 13 is placed in front of them to distribute the load across the servers and improve service quality. However, deciding when to add servers 14 and reconfiguring the servers 14 and load balancer 13 are often performed manually, which incurs a large cost, and servers must always be secured to match the maximum load.

Patent Document 1 defines a method for adding servers and distributing requests from users, but it requires a server-selection mechanism to be incorporated on the user side and is therefore unsuitable for services aimed at an unspecified number of users. It also has the problem of requiring the exchange of management information beyond the requests themselves.

In addition, the method of Patent Document 2 can be applied only to the delivery of static information, and cannot be applied to services that return different information for each user request.

Furthermore, Patent Document 3 likewise assumes static information and does not consider the case where the load on a file server or the like becomes excessive.

Patent Document 1

JP 9-106381 A

Patent Document 2

JP 9-179820 A

Patent Document 3

JP 2002-259354 A

DISCLOSURE OF THE INVENTION

An object of the present invention is to provide a load balancing system that distributes the load for the service provider and can respond flexibly to changes in user demand.

The method of the present invention is a load balancing method for an apparatus having multiple servers that provide services to clients via a network. It is characterized by comprising a control step of providing, in addition to the servers that share the load of normal service, a plurality of standby servers to which no service is assigned in the initial state and, in anticipation of an increased load on a server providing normal service, setting up the application for that service on a standby server so that it shares the load with the servers providing the normal service. According to the present invention, in an apparatus such as a data center, a plurality of standby servers are provided in addition to the servers for normal service; when the load on the servers providing a normal service has increased, the application is installed on a standby server so that it can provide the service and share the load.
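As one concrete reading of this control step, the shared standby pool can be sketched as follows (a minimal sketch; the `Server` and `SparePool` classes and their method names are illustrative assumptions, not from the patent):

```python
class Server:
    """Minimal stand-in for a physical server (illustrative)."""
    def __init__(self, name):
        self.name = name
        self.app = None       # application currently installed
        self.service = None   # service the server is assigned to

class SparePool:
    """Standby servers with no service assigned in the initial state,
    shared by all services in the center (names are assumptions)."""
    def __init__(self, servers):
        self.idle = list(servers)

    def acquire(self, service, app):
        """Set up a service's application on a standby server so that it
        can share the load; returns None when the pool is exhausted."""
        if not self.idle:
            return None   # caller would next request a cooperating center
        server = self.idle.pop()
        server.app = app
        server.service = service
        return server

    def release(self, server):
        """Return to the pool a server whose service load has dropped."""
        server.app = None
        server.service = None
        self.idle.append(server)
```

When `acquire` returns `None`, the controller would fall back to the inter-center cooperation described in the other aspect of the invention.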

Further, in another aspect of the present invention, apparatuses each having spare servers and connected by a network control one another so as to mutually provide those spare servers. Even when a transient load is beyond what one data center can handle, a higher processing capability is obtained by having a plurality of apparatuses cooperate over the network to handle the load, so that interruption of service due to a large load can be avoided. This also reduces the number of spare servers each apparatus must hold, since redundant hardware need not be provided in every apparatus.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is an example of a conventional load distribution system.

Figure 2 is a diagram showing a basic configuration of an embodiment of the present invention.

Figure 3 is a diagram showing the network arrangement within a center in the basic configuration of FIG. 2.

Figure 4 is a diagram showing a first embodiment of the present invention.

Figure 5 is a diagram showing the operation of the first embodiment of the present invention.

Figure 6 is a diagram showing the data used to calculate server load and capacity.

Figure 7 is a diagram showing the data used to select servers according to the load.

Figure 8 is a diagram showing the relationship between the capability of servers to be added and the predicted load.

Figure 9 is a diagram illustrating a configuration in which a spare server is shared by multiple services.

Figure 10 is a diagram showing a configuration in which spare servers are provided between different centers.

Figure 11 is a diagram for explaining the operation of an embodiment of the present invention.

Figure 12 is a diagram explaining how network bandwidth is secured when cooperating with other centers.

Figure 13 is a diagram illustrating an application example of an embodiment of the present invention to a web server.

Figure 14 is a diagram showing an application example of an embodiment of the present invention to a Web service.

Figure 15 is a diagram showing an application example of an embodiment of the present invention in the case where comparable centers mutually lend resources to each other.

Figure 16 is a diagram showing an example in which an embodiment of the present invention is applied in the case of a front center without backup servers.

Figures 17 to 24 are flowcharts for explaining the operation of an embodiment of the present invention in the case where there is no cooperation between the databases provided in the centers.

Figures 25 to 30 are flowcharts showing the flow of processing in an embodiment of the present invention when there is database cooperation.

BEST MODE FOR CARRYING OUT THE INVENTION

In the present invention, changes in user demand are predicted, and servers in the data center, or in another cooperating data center, are dynamically added or deleted accordingly to ensure service quality; at the same time, surplus servers are shared among multiple services, which aims at cost reduction.

Figure 2 is a diagram showing a basic configuration of an embodiment of the present invention.

A client 10 accesses a Web server 15-1 via a network 11 and the load balancer 13-1 of a front center 12-1. The Web server 15-1 processes data for the client 10 by accessing a database server 14-1 or a file server 14-2, and the client receives the service. A back-stage center 12-2 has almost the same configuration as the front center 12-1: it accepts requests from the client 10 via the load balancer 13-1, distributes the load with a load balancer 13-2, and guides the client 10 to a Web server 15-2. The client 10 then accesses a database server 14-3 or 14-4 via the Web server 15-2 and receives the service.

Here, the front center 12-1 is the center that receives requests from users directly, and the back-stage center 12-2 is a center that processes user requests through the front center 12-1. Server assignment between data centers is a many-to-many relationship: one data center may utilize the servers of a plurality of data centers, and one data center may respond to requests from a plurality of data centers at the same time. System controllers 16-1 and 16-2 aggregate the load states of the servers and clients, make determinations, and apply the results to the servers 14-1 to 14-4 and the load balancers 13-1 and 13-2. When server resources are insufficient, servers 17-1 and 17-2 held as spare servers are set up with the required functions, added to the service, and thereby improve its capacity.

Figure 3 is a diagram showing the network arrangement within a center in the basic configuration of FIG. 2.

In the physical network configuration, all servers are connected directly below a single switch group 20, and a plurality of logically independent networks (VLAN 0, VLAN 11, VLAN 12, VLAN 21) are configured on top of it. This arrangement makes it possible to automate the process of adding a server at the required position.

When adding or deleting servers, the server capacity is derived from server specifications such as CPU performance and network configuration, so that the required servers can be calculated and properly assigned even in an environment where various types of hardware are mixed. The traffic to each server is calculated at the same time, and network bandwidth is secured and properly arbitrated.

Furthermore, by measuring the load and predicting load fluctuations, servers are added before the load becomes excessive, thereby ensuring quality of service.

Figure 4 is a diagram showing a first embodiment of the present invention.

In the drawing, the same components as in FIG. 2 are denoted by the same reference numerals, and their detailed description is omitted.

If requests from users exceed the capability of the assigned servers, response times increase or responses fail, giving users an unpleasant experience. If the load increases further in this state, server failure may result. To prevent this, the system controller 16 measures the load state of the servers; if it determines that the current number of servers will cause a problem, it adds a server from the backup servers 17, sets up and introduces the applications, services, and data to be used, and incorporates the server into the service by updating the settings of the dependent devices and servers.

Figure 5 is a diagram showing the operation of the first embodiment of the present invention.

In the figure, the same components as in FIG. 4 are denoted by the same reference numerals, and their description is omitted.

If user requests decrease, surplus servers arise. From the viewpoint of operational cost and utilization, it is desirable not to lower service quality but to delete the surplus servers, free them as spares, and use them for other services. Accordingly, the server's cooperation with the service is removed by deleting the relevant configuration from dependent devices, the remaining settings are released, and the server is returned to the backup servers 17.

Figure 6 is a diagram showing the data used to calculate server load and capacity. To add or delete service capacity as needed, information is required on which servers exist and how much service capacity each provides. In a data center or the like, the service capacity per unit changes depending on the combination of the servers and equipment used and the applications and services. Keeping the servers in use uniform is practically impossible, especially when a plurality of data centers cooperate, so it is necessary to calculate service capacity from equipment specifications such as CPU and memory. Therefore, a method is used that estimates the performance value from the performance value in a typical configuration, taking into account differences such as CPU power.
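The estimation described here can be sketched in a few lines: scale a performance value measured on a typical reference configuration by relative CPU power. The reference numbers below are invented for illustration, not values from the patent:

```python
# Illustrative reference configuration (assumed values, not from the patent):
REFERENCE_CPU_MHZ = 1000       # CPU clock of the typical configuration
REFERENCE_CAPACITY = 200.0     # requests/sec measured on that configuration

def estimate_capacity(cpu_mhz):
    """Estimate a server's service capacity (requests/sec) from its CPU
    clock by linear scaling against the typical configuration."""
    return REFERENCE_CAPACITY * cpu_mhz / REFERENCE_CPU_MHZ
```

A real estimator would fold in memory, network configuration, and the application mix as the text notes; linear CPU scaling is only the simplest case.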

Figure 7 is a diagram showing the data used to select servers in accordance with the magnitude of the load.

Here, not only the service capability but also information on which applications each server unit is suited to is retained as the preferred characteristics of the server. Since the performance values of the available servers are not uniform, as described above, a structure must be created that can provide the needed capability through combination. Thus, from the performance values and characteristics and the required performance value calculated as in FIG. 6, servers are selected in descending order of recommendation degree until the required amount is satisfied.

Figure 8 is a diagram showing the relationship between the capability of servers to be added and the predicted load. If resources are added only after the measured request volume exceeds the service capacity, quality of service cannot be guaranteed in cases such as a sudden rise in load. Therefore, trends in the load are identified, and when an increase in request volume is expected, service capability commensurate with the predicted request volume is added in advance to prevent degradation of service quality. Linear extrapolation and the like can be considered as prediction methods.

Figure 9 is a diagram illustrating a configuration for sharing a spare server among multiple services. Looking at the load states of the plurality of services in a data center, it is quite rare for all services to be under high load simultaneously, so if spare resources are secured for each service separately, unused resources are always present. By sharing the spare resources among a plurality of services, the required service capability can be added with fewer backup resources as a whole, and maintenance costs can be spread through sharing. In the center 12, services 1 and 2 are mounted, and load balancers 13-1 and 13-2 are provided for each. Service 1 comprises a Web server 15-1, a database server 14-1, and a file server 14-2. Service 2 comprises a server 25. A spare server 17 is provided in common to service 1 and service 2; the system controller 16 watches the load status and introduces additional servers from the backup servers 17 into service 1 or service 2.

Figure 10 is a diagram showing a configuration for providing spare servers between different centers.

In the figure, the same components as in FIG. 2 are denoted by the same reference numerals, and their description is omitted.

Depending on the scale of the data center 12-1, there are cases where, physically or economically, sufficient spare servers 17-1 cannot be secured even when spare servers are shared among different services. Such a shortage can also occur when loads collide across services even though backup servers in the data center had been secured sufficiently. In such a case, a different data center 12-2 connected by a network is used as a back-stage center, and its backup servers 17-2 are utilized via the network.

Figure 11 is a diagram for explaining the operation of an embodiment of the present invention.

In the figure, the same components as in FIG. 9 are designated by the same reference numerals, and their description is omitted.

Depending on the service, what is needed is not only servers that interact directly with users but also databases and other servers that exchange information behind them. For such services, no performance improvement can be expected unless the load state is checked for each function and servers are added to the function that lacks processing power. Therefore, the system controller 16 checks the load for each tier and, when adding or deleting servers, increases or decreases capacity by changing the settings of the cooperating servers.

Figure 12 is a diagram explaining how network bandwidth is secured when cooperating with other centers.

In the figure, the same components as in FIG. 10 are denoted by the same reference numerals. When multiple services and cooperative processes operate at the same time, the expected processing power cannot be obtained merely by adding servers; the traffic between services and functions must also be arbitrated. The bandwidth required in each portion is calculated, and by securing bandwidth on the network in consideration of those ratios, sufficient performance is obtained as a whole.

According to the above configuration, the load from users and the server capacity are monitored, and sufficient resources are allocated from the data center, or from a cooperating data center, before the load exceeds the server capacity, so that quality of service for user requests can be guaranteed. At the same time, the required spare servers can be shared widely, reducing the total number of servers required overall. Moreover, even in services where servers with a plurality of functions cooperate with one another, servers can be added for whichever function has become a bottleneck, so the system can scale sufficiently. Furthermore, since the entire process can be automated, changes in user demand can be followed quickly.

Figure 13 is a diagram illustrating an application example of an embodiment of the present invention to a web server. In the figure, the same components as in FIG. 12 are denoted by the same reference numerals, and their description is omitted.

In a light-load state, only the front center 12-1 operates. If the load increases, a spare server 17-1 in the front center 12-1 is added as a Web server 15-1. If the load increases further, a Web server group 15-2 is created in the back-stage center 12-2 so that the back-stage center 12-2 also takes charge of the load.

Figure 14 is a diagram showing an application example of an embodiment of the present invention to a Web service.

In the figure, the same components as in FIG. 12 are denoted by the same reference numerals, and their description is omitted.

In this example, the Web service is a combination of a Web server 15-1, a database server 14-1, and a file server 14-2. In a light-load state, only the front center 12-1 operates. As the load increases, spare servers 17-1 are successively added to whichever portion has become a bottleneck, and if the front center 12-1 can no longer cover the load, cooperation with the back-stage center 12-2 is carried out. In this example, the database servers 14-1 also perform data synchronization in the cooperation between the front center 12-1 and the back-stage center 12-2. This is realized by creating a VLAN straddling the centers and reserving bandwidth.

Figure 15 is an application example of an embodiment of the present invention in the case where comparable centers mutually lend resources to each other.

If the processing capability of service 1 in center 1 can no longer be met by the spare server 30-1 in center 1, cooperation is requested of center 2, and servers in center 2 (the hatched portion and spare server 30-2) are used. If the server capacity in center 2 is also exhausted (the case where the capacity, including the spare server 30-2, is exhausted), cooperation is requested of yet another center 3, and servers in center 3 (the hatched portion and spare server 30-3) are utilized.

Figure 16 is a diagram showing an example in which an embodiment of the present invention is applied in the case of a front center without backup servers.

In the front center 12-1, when the system controller 16-1 determines that the servers are insufficient for providing a service, it asks the back-stage center 12-2 for cooperation and uses servers in the back-stage center 12-2. Here, load balancers and Web servers are provided for services 1 and 2, and the servers for service 1 and service 2 carry out the respective services. Furthermore, in the back-stage center 12-2, when server capacity is no longer sufficient, only the necessary spare servers 17 are added to the respective services. The determinations, the additions, and the cooperation with the front center 12-1 are carried out by the system controller 16-2.

Figures 17 to 24 are flowcharts explaining the operation of an embodiment of the present invention in the case where there is no cooperation between the databases provided in the centers.

Figure 17 is a flowchart showing the overall flow of the system controller. First, in step S10, load measurement is performed. In step S11, it is determined whether the predicted capacity exceeds the allocated capacity. If the determination in step S11 is YES, additional processing power is provided in step S12, and the process proceeds to step S15. In step S15, the process waits 10 seconds; this numerical value should be set appropriately by the designer.

If the determination in step S11 is NO, it is determined in step S13 whether the current capacity is equal to or less than one half of the allocated capacity. If the determination in step S13 is YES, the processing power is reduced in step S14, and the process proceeds to step S15. If the determination in step S13 is NO, the process proceeds directly to step S15.

After step S15, the process returns to step S10.
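The overall loop of Figure 17 might look like the following sketch; the class and callback names are assumptions, and the 10-second wait is the designer-tunable value mentioned above:

```python
import time

class CapacityState:
    """Allocated capacity plus add/reduce hooks (illustrative names)."""
    def __init__(self, allocated):
        self.allocated = allocated
        self.log = []

    def add(self, amount):          # step S12: add processing power
        self.allocated += amount
        self.log.append(("add", amount))

    def reduce(self, amount):       # step S14: reduce processing power
        self.allocated -= amount
        self.log.append(("reduce", amount))

def controller_loop(measure, state, wait=10, iterations=1):
    """One pass of the Figure 17 loop per iteration. `measure` returns
    (predicted_load, current_load) as produced by step S10."""
    for _ in range(iterations):
        predicted, current = measure()            # step S10: load measurement
        if predicted > state.allocated:           # step S11
            state.add(predicted - state.allocated)
        elif current <= state.allocated / 2:      # step S13
            state.reduce(state.allocated - current)
        time.sleep(wait)                          # step S15: wait 10 seconds
```

The reduction amount follows Figure 23, which subtracts the current measured value from the assigned value.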

Figure 18 is a diagram showing details of the load measurement step S10 in FIG. 17. In step S20, the average number of requests processed over 10 seconds is gathered from the servers in use; this 10 seconds should match the value of step S15 in FIG. 17. In step S21, the total of the average process counts is calculated and added to the measurement history. In step S22, it is determined whether the measurement history contains four or more terms. If the determination in step S22 is NO, the most recent history value is taken as the predicted value 30 seconds ahead in step S23, and the process proceeds to step S25. If the determination in step S22 is YES, a predicted value 30 seconds ahead is calculated in step S24 by a least-squares fit of the latest four history values, and the process proceeds to step S25; that is, a regression line is obtained from the most recent four history values and used to obtain the predicted value 30 seconds ahead. In step S25, the predicted value 30 seconds ahead is set; in step S26, the latest history value is set as the current value, and the process returns to the flow of FIG. 17.
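A sketch of this prediction step, under the assumption that the least-squares fit is an ordinary least-squares line over the latest four samples extrapolated three 10-second intervals ahead (the function name is illustrative):

```python
def predict_load(history, horizon_steps=3):
    """Predict the load `horizon_steps` measurement intervals ahead
    (30 s at a 10 s period), per Figure 18: with fewer than four history
    points the latest value is used as-is (step S23); otherwise a
    least-squares line is fitted to the latest four points and
    extrapolated (step S24)."""
    if len(history) < 4:
        return history[-1]
    y = history[-4:]
    x = [0.0, 1.0, 2.0, 3.0]          # the four most recent sample times
    n = 4
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    # extrapolate from the latest sample (x = 3) to x = 3 + horizon_steps
    return intercept + slope * (3 + horizon_steps)
```

For a perfectly linear history the extrapolation is exact, which makes the behavior easy to check.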

Figure 19 is a diagram showing the details of the processing-capacity addition process of step S12 in FIG. 17.

In step S30, the additional capacity amount is determined by subtracting the current allocated value from the predicted value. In step S31, it is determined whether there is a spare server in the center. If the determination in step S31 is YES, additional servers in the center are selected in step S32. In step S33, it is determined whether the additional capacity amount is satisfied. If the determination in step S33 is NO, the process proceeds to step S34; if YES, it proceeds to step S38. If the determination in step S31 is NO, the process proceeds to step S34. In step S34, it is determined whether there is a cooperation-destination center with spare processing capability. If the determination in step S34 is YES, capacity in the cooperating center is allocated in step S36. In step S37, it is determined whether the additional processing-power amount is satisfied. If the determination in step S37 is NO, the process returns to step S34; if YES, it proceeds to step S38. If the determination in step S34 is NO, the administrator is warned in step S35 that the additional capacity amount cannot be satisfied, and the process proceeds to step S38. In step S38, a VLAN is set to include the selected servers; in step S39, the application is set on the selected servers, and the process proceeds to step S40.

In step S40, it is determined whether there is cooperation between centers; if the determination is NO, the process proceeds to step S43. In the case of YES in step S40, the load distribution ratio and allocation devices of the cooperating center are determined and set in step S41, the communication bandwidth between the cooperating center and the own center is set in step S42, and the process proceeds to step S43. In step S43, the load distribution ratio of the own center is determined, the allocation devices are set, and the process returns to the flow of FIG. 17.
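Steps S30 to S37 amount to a greedy allocation that exhausts local spares before asking cooperating centers. One hedged sketch, where the `PartnerCenter` stand-in and its `allocate` method are illustrative assumptions rather than the patent's interfaces:

```python
class PartnerCenter:
    """Minimal stand-in for a cooperation-destination center."""
    def __init__(self, spares):
        self.spares = spares  # list of (name, capacity) tuples

    def allocate(self, needed):
        """Grant spare servers until the requested capacity is covered."""
        servers, cap = [], 0
        while cap < needed and self.spares:
            name, c = self.spares.pop()
            servers.append(name)
            cap += c
        return {"servers": servers, "capacity": cap}

def add_capacity(needed, local_spares, partner_centers, warn):
    """Capacity-addition outline of Figure 19. Servers are
    (name, capacity) tuples; all names are illustrative."""
    chosen = []
    # Steps S31-S33: draw on the local spare pool first.
    while needed > 0 and local_spares:
        name, cap = local_spares.pop()
        chosen.append(name)
        needed -= cap
    # Steps S34-S37: then ask cooperating centers, one at a time.
    for center in partner_centers:
        if needed <= 0:
            break
        granted = center.allocate(needed)
        chosen.extend(granted["servers"])
        needed -= granted["capacity"]
    # Step S35: warn the administrator if demand still cannot be met.
    if needed > 0:
        warn(f"capacity shortfall: {needed}")
    return chosen
```

The VLAN, application, and bandwidth settings of steps S38 to S42 would follow once the server list is known.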

Figure 20 is a flowchart illustrating in detail the additional-server selection process of step S32 in FIG. 19.

In step S50, it is determined whether there is a server set up for the required application. If the determination in step S50 is NO, the process proceeds to step S54. If the determination in step S50 is YES, it is determined in step S51 whether, among the servers for the required application, there is a server that can satisfy the additional capacity amount with a single unit. If the determination in step S51 is NO, the maximum-performance server for the required application is selected in step S52, and the flow returns to step S50. If the determination in step S51 is YES, then among the servers for the required application that can cover the additional processing-power amount on their own, the server with the minimum performance is selected, and the process proceeds to step S58.

In step S54, it is determined whether there is an available server. If the determination in step S54 is NO, the process proceeds to step S58. If the determination in step S54 is YES, it is determined in step S55 whether there is a server that can satisfy the additional capacity amount with a single unit. If the determination in step S55 is NO, the maximum-performance server is selected in step S56, and the flow returns to step S54. If the determination in step S55 is YES, then in step S57, among the servers that can satisfy the additional capacity amount, the server with the lowest performance is selected, and the process proceeds to step S58. In step S58, the assigned-server list is constructed, and the process returns to FIG. 19.
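The selection rule of Figure 20 — prefer the smallest single server that covers the whole remaining amount, otherwise take the most powerful one and repeat, trying application-ready servers before free ones — can be sketched as follows (names are illustrative assumptions):

```python
def select_servers(needed, app_servers, free_servers):
    """Server-selection sketch of Figure 20. Each pool is a list of
    (name, capacity) tuples; servers already set up for the required
    application are drawn from before free servers."""
    chosen = []

    def draw(pool, remaining):
        while remaining > 0 and pool:
            covering = [s for s in pool if s[1] >= remaining]
            if covering:
                # smallest server that covers the rest in one unit
                pick = min(covering, key=lambda s: s[1])
            else:
                # otherwise the most powerful one, then loop again
                pick = max(pool, key=lambda s: s[1])
            pool.remove(pick)
            chosen.append(pick[0])
            remaining -= pick[1]
        return remaining

    remaining = draw(app_servers, needed)    # steps S50-S53
    remaining = draw(free_servers, remaining)  # steps S54-S57
    return chosen, remaining                 # step S58: assigned-server list
```

A nonzero remainder on return corresponds to the "not satisfied" branch that sends Figure 19 on to the cooperating centers.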

Figure 21 is a flowchart showing the flow of the cooperating-center processing-capability allocation process in step S36 of FIG. 19.

In step S60, it is determined whether the processing-capacity limit due to bandwidth is smaller than the desired allocation value. If the determination in step S60 is NO, the process proceeds to step S62. If the determination in step S60 is YES, the bandwidth-imposed limit is made the quota limit in step S61, and the process proceeds to step S62.

In step S62, server selection is requested of the cooperation-destination center; in step S63, additional servers in the cooperation-destination center are selected; in step S64, the assigned-server list is constructed, and the process returns to FIG. 19.

Figure 22 is a detailed flow of the application setting in step S39 of FIG. 19.

In step S70, it is determined whether there is cooperation between centers. If the determination in step S70 is NO, the process proceeds to step S74. If the determination in step S70 is YES, it is determined in step S71 whether the archived application has already been transferred. If the determination in step S71 is YES, the process proceeds to step S73. If the determination in step S71 is NO, the archived application is transferred to the cooperation-destination center in step S72, and the process proceeds to step S73. In step S73, the application is installed on the additional servers, and the process proceeds to step S74. In step S74, the application is installed on the additional servers within the own center, and the process returns to the processing of FIG. 19.

Figure 23 is a flow showing the processing-capability reduction process of step S14 in FIG. 17.

In step S80, the reduced capacity amount is determined by subtracting the current measured value from the assigned value. In step S81, it is determined whether there is a cooperating center. If the determination in step S81 is YES, the servers to reduce in the cooperating center are determined in step S82, and in step S83 it is determined whether all servers in the cooperating center have been reduced. If the determination in step S83 is YES, the flow returns to step S81; if NO, the process proceeds to step S85. If the determination in step S81 is NO, the servers to reduce in the own center are determined in step S84, and the process proceeds to step S85.

In step S85, the load distribution ratio of the own center is determined and the allocation devices are set. In step S86, the load distribution ratio of the cooperating center is determined and the allocation devices are set. Then, in step S87, the process waits for completion of user-request processing. In step S88, the application is removed from the servers being reduced; in step S89, the VLAN is set to include only the remaining servers (the cooperative network channel is set); and in step S90, it is determined whether the cooperation is to be cancelled. In the case of YES in step S90, the bandwidth between the cooperating center and the own center is released in step S91, and the process returns to the processing of FIG. 17. When the determination in step S90 is NO, the process likewise returns to FIG. 17.

Figure 24 is a flowchart showing the selection of servers to reduce in step S82 of FIG. 23 or step S84.

In step S100, it is determined whether there is a server that can be used for other applications. If the determination in step S100 is NO, the process proceeds to step S103. If the determination in step S100 is YES, it is determined in step S101 whether there is a server with performance lower than the remaining reduction capacity. If the determination in step S101 is NO, the process proceeds to step S103. If the determination in step S101 is YES, then in step S102, among the servers whose performance is lower than the remaining reduction capacity, the server with the maximum performance is reduced, and the process returns to step S100.

In step S103, it is determined whether there is a server currently in use. If the determination in step S103 is NO, the process proceeds to step S106. If the determination in step S103 is YES, it is determined in step S104 whether there is a server of lower performance than the remaining reduction capacity. If the determination in step S104 is NO, the process proceeds to step S106. If the determination in step S104 is YES, then in step S105, among the servers whose performance is lower than the remaining reduction capacity, the server of maximum performance is reduced, and the flow returns to step S103.

In step S106, a list of the removed servers is generated, and the process returns to FIG. 23.
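One possible sketch of the selection loop of FIG. 24 (steps S100 to S106) follows. Servers wanted by other applications are released first, then servers currently in use; within each pool, the largest server whose performance still fits under the remaining reduction amount is removed, so the total removed capacity never overshoots the target. The `(name, performance)` representation is an assumption for illustration.

```python
def select_reduction_servers(reusable, in_use, reduction_amount):
    """Greedy selection of servers to remove (sketch of FIG. 24).

    reusable: servers another application can use (steps S100-S102)
    in_use:   servers currently in use (steps S103-S105)
    Both are lists of (name, performance) pairs.
    """
    removed = []
    remaining = reduction_amount
    for pool in (reusable, in_use):
        while remaining > 0:
            # servers of lower performance than the remaining reduction
            candidates = [s for s in pool
                          if s not in removed and s[1] < remaining]
            if not candidates:        # determination NO: next pool, then S106
                break
            best = max(candidates, key=lambda s: s[1])  # largest that fits
            removed.append(best)
            remaining -= best[1]
    return removed                    # step S106: list of removed servers
```

Because only servers strictly below the remaining amount are candidates, the service always keeps at least the capacity it was reduced to.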

FIGS. 25 to 30 are flowcharts showing the flow of processing in an embodiment of the present invention in which database cooperation is involved.

FIG. 25 is a flowchart showing the overall processing flow of the own center that issues a cooperation request. In step S110, the load of the Web servers is measured. In step S111, it is determined whether the predicted processing capacity is greater than the allocated capacity. If the determination in step S111 is YES, Web capacity is added in step S112, and the process proceeds to step S115. If the determination in step S111 is NO, it is determined in step S113 whether the current processing capacity is smaller than half of the allocated capacity. If the determination in step S113 is NO, the process proceeds to step S115; if YES, the Web processing capacity is reduced in step S114, and the process proceeds to step S115. In step S115, the database load in the center is measured. In step S116, it is determined whether the predicted processing capacity is greater than the allocated capacity. If the determination in step S116 is YES, database capacity is added in step S117, and the process proceeds to step S120. If the determination in step S116 is NO, it is determined in step S118 whether the current processing capacity is smaller than half of the allocated capacity. If the determination in step S118 is NO, the process proceeds to step S120; if YES, the database processing capacity is reduced in step S119, and the process proceeds to step S120. In step S120, the process waits for 10 seconds; this waiting time should be set appropriately by the designer. After step S120, the flow returns to step S110.
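The measure/compare/adjust/wait cycle above can be sketched for one tier (Web or database) as follows. The callback names (`measure`, `predicted`, `allocated`, `add_capacity`, `reduce_capacity`) are hypothetical; the patent does not prescribe an API.

```python
import time

def control_loop(measure, predicted, allocated, add_capacity, reduce_capacity,
                 interval=10.0, cycles=None):
    """Periodic capacity controller (sketch of FIG. 25 for one tier).

    Capacity is added when the predicted load exceeds the allocation
    (steps S111-S112) and reduced when the current load falls below
    half of it (steps S113-S114); the loop then sleeps for `interval`
    seconds (step S120, designer-tunable).
    """
    n = 0
    while cycles is None or n < cycles:
        current = measure()                 # step S110 (or S115)
        if predicted() > allocated():       # step S111 (or S116)
            add_capacity()                  # step S112 (or S117)
        elif current < allocated() / 2:     # step S113 (or S118)
            reduce_capacity()               # step S114 (or S119)
        time.sleep(interval)                # step S120
        n += 1
```

The half-of-allocation threshold for reduction gives hysteresis: capacity is not released the moment the load dips below the allocation, which avoids oscillating between addition and reduction.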

FIG. 26 is a flowchart showing the overall processing flow of the cooperation-destination center.

In step S130, the database load in the center is measured. In step S131, it is determined whether the predicted processing capacity is greater than the allocated capacity. If the determination in step S131 is YES, database capacity is added in step S132, and the process proceeds to step S135. If the determination in step S131 is NO, it is determined in step S133 whether the current processing capacity is smaller than half of the allocated capacity. If the determination in step S133 is NO, the process proceeds to step S135; if YES, the database processing capacity is reduced in step S134, and the process proceeds to step S135. In step S135, the process waits for 10 seconds, and the flow returns to step S130. The 10 seconds is not limiting and should be set appropriately by the designer.

FIG. 27 is a flowchart showing the details of the Web load measurement and the database load measurement performed at each center.

In step S140, the average number of processes over 10 seconds is collected from the servers in use. This 10 seconds should be the same value as the waiting time of step S120 in FIG. 25 and of step S135 in FIG. 26. In step S141, the total average processing speed is calculated and added to the measurement history. In step S142, it is determined whether the measurement history contains four or more entries. If the determination in step S142 is NO, the latest history value is used in step S143 as the predicted value 30 seconds ahead, and the process proceeds to step S145. If the determination in step S142 is YES, the predicted value 30 seconds ahead is derived in step S144 by least-squares approximation from the latest four history entries, and the process proceeds to step S145. The manner of this derivation is as described with reference to FIG. 18. In step S145, the predicted value 30 seconds ahead is set. In step S146, the latest history is set to the current value, and the process returns to the processing of FIG. 25 or FIG. 26. FIG. 28 is a detailed flowchart of the Web capacity addition processing in step S112 of FIG. 25.
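The prediction of steps S142 to S145 — fall back to the latest sample when the history is short, otherwise fit a least-squares line through the latest four samples and extrapolate 30 seconds forward — may be sketched as follows. The closed-form slope/intercept formulas are the standard ordinary least-squares expressions; the sampling interval and horizon defaults mirror the 10-second window and 30-second prediction of the text.

```python
def predict_30s_ahead(history, window=10.0, horizon=30.0):
    """Load prediction (sketch of steps S142-S145).

    history: per-window average processing rates, oldest first,
    sampled every `window` seconds. With fewer than four samples the
    latest value is returned as-is (step S143); otherwise a
    least-squares line through the latest four samples is extrapolated
    `horizon` seconds past the newest sample (step S144).
    """
    if len(history) < 4:                     # step S142 NO -> S143
        return history[-1]
    ys = history[-4:]                        # latest four history entries
    xs = [i * window for i in range(4)]      # sample times 0, 10, 20, 30
    n = 4
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return intercept + slope * (xs[-1] + horizon)
```

For a load growing linearly, the extrapolated value anticipates the level the load will reach by the time added servers become usable, which is what lets capacity be added before the allocation is actually exceeded.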

In the flow of FIG. 28, when a cooperating center is to be added, the processing starts from step S154.

First, in step S150, the additional capacity amount is determined by subtracting the currently allocated value from the predicted value. In step S151, it is determined whether there is a spare server in the center. If the determination in step S151 is NO, the process proceeds to step S154. If the determination in step S151 is YES, an additional server in the center is selected in step S152; the details of this processing are as shown in FIG. 20. Then, in step S153, it is determined whether the additional capacity amount is satisfied. If the determination in step S153 is NO, the process proceeds to step S154; if YES, the process proceeds to step S158.

In step S154, it is determined whether there is a cooperation-destination center with spare processing capacity. If the determination in step S154 is YES, processing capacity is allocated at the cooperating center in step S156; the details of this processing are as shown in FIG. 21. In step S157, it is determined whether the additional processing capacity amount has been satisfied. If the determination in step S157 is NO, the flow returns to step S154; if YES, the process proceeds to step S158. If the determination in step S154 is NO, the administrator is warned in step S155 that the additional capacity amount cannot be fulfilled, and the process proceeds to step S158.

In step S158, a VLAN is set so as to include the selected servers, and in step S159, the application is configured on the selected servers. The application settings are as shown in FIG. 22. In step S160, it is determined whether there is cooperation between centers. If the determination in step S160 is YES, the load distribution ratio of the cooperating center is determined and set in the allocation device in step S161, the communication band between the own center and the cooperating center is set in step S162, and the process proceeds to step S163.

If the determination in step S160 is NO, the process proceeds directly to step S163. In step S163, the load distribution ratio of the own center is determined and set in the allocation device, and the process returns to the processing of FIG. 25.
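The capacity-addition policy of FIG. 28 — local spare servers first, then spare capacity at cooperating centers, and an administrator warning if the shortfall still remains — can be sketched as below. The data model (`local_spares` as a list of capacities, `partner_centers` as a name-to-spare-capacity mapping) and the `warn_admin` callback are hypothetical simplifications; the VLAN, application, and load-ratio settings of steps S158 to S163 are omitted.

```python
def add_web_capacity(shortfall, local_spares, partner_centers, warn_admin):
    """Capacity addition across centers (sketch of FIG. 28).

    Returns (capacities of local spare servers taken,
             dict of amounts borrowed per cooperating center).
    """
    taken, borrowed = [], {}
    # steps S151-S153: use spare servers in the own center first
    while shortfall > 0 and local_spares:
        capacity = local_spares.pop(0)
        taken.append(capacity)
        shortfall -= capacity
    # steps S154-S157: then allocate capacity at cooperating centers
    for center, spare in partner_centers.items():
        if shortfall <= 0:
            break
        lend = min(spare, shortfall)
        borrowed[center] = lend
        shortfall -= lend
    if shortfall > 0:                  # step S155: cannot be fulfilled
        warn_admin("additional capacity amount cannot be fulfilled")
    return taken, borrowed
```

Preferring local spares keeps traffic inside the own center; the cooperating centers (and their communication band, step S162) are engaged only for the remainder.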

FIG. 29 shows the detailed flow of the database capacity addition processing in step S117 of FIG. 25 and step S132 of FIG. 26.

In step S170, the additional capacity amount is determined by subtracting the current allocated value from the predicted value. In step S171, it is determined whether there is a spare server in the center. If the determination in step S171 is NO, the Web capacity achievable with the current database is calculated in step S177, and in step S178, the shortfall in Web capacity is added at the cooperating center. The processing of step S178 is as shown in FIG. 28. The process then returns to FIG. 25 or FIG. 26.

If the determination in step S171 is YES, an additional server in the center is selected in step S172. Then, in step S173, it is determined whether the additional capacity amount is satisfied. If the determination in step S173 is NO, the process proceeds to step S177. If the determination in step S173 is YES, a VLAN is set in step S174 so as to include the selected server, the database is set up on the selected server in step S175, and the database list of the Web servers in the center is updated in step S176, after which the process returns to the processing of FIG. 25 or FIG. 26.

FIG. 30 is a flowchart showing the details of the selection processing for an additional server shared between the Web servers and the database.

In step S180, it is determined whether there is a server with the required application. If the determination in step S180 is YES, it is determined in step S181 whether, among the servers with the required application, there is a server that can satisfy the additional capacity amount on its own. If the determination in step S181 is NO, the server of maximum performance with the required application is selected in step S182, and the flow returns to step S180. If the determination in step S181 is YES, the server of minimum performance among the servers that can satisfy the additional processing capacity amount on their own is selected in step S183, and the process proceeds to step S188.

If the determination in step S180 is NO, it is determined in step S184 whether there is a server available for use. If the determination in step S184 is YES, it is determined in step S185 whether there is a server that can satisfy the additional capacity amount on its own. If the determination in step S185 is NO, the usable server of maximum performance is selected in step S186, and the process returns to step S184. If the determination in step S185 is YES, the server of minimum performance among the servers that can satisfy the additional capacity amount on their own is selected in step S187, and the process proceeds to step S188. If the determination in step S184 is NO, the process proceeds directly to step S188.
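The selection loop of steps S180 to S187 can be sketched as follows: if a single server can cover the remaining amount, the minimum-performance such server is taken (so large servers stay free for later demands); otherwise the maximum-performance server is taken and the loop repeats on the remainder. Representing each pool as a plain list of performance values is a simplifying assumption for illustration.

```python
def select_additional_servers(amount, with_app, available):
    """Common additional-server selection (sketch of FIG. 30).

    with_app:  performances of servers already carrying the required
               application (steps S180-S183), consumed first
    available: performances of generally usable servers (steps S184-S187)
    """
    chosen = []
    for pool in (with_app, available):
        pool = sorted(pool)
        while amount > 0 and pool:
            sufficient = [p for p in pool if p >= amount]
            if sufficient:                 # one server can satisfy the rest:
                pick = min(sufficient)     # take the minimum-performance one
                pool.remove(pick)
                chosen.append(pick)
                amount = 0                 # -> step S188
            else:                          # otherwise take the maximum and loop
                pick = pool.pop()          # largest remaining (S182 / S186)
                chosen.append(pick)
                amount -= pick
        if amount <= 0:
            break
    return chosen                          # step S188: assigned server list
```

Taking the smallest sufficient server at the final step, but the largest servers before it, fills the requested amount with few servers while avoiding tying up an oversized server for a small residual demand.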

In step S188, the assigned server list is constructed, and the process returns to the processing of FIG. 28 or FIG. 29.

Industrial Applicability

According to the present invention, the quality of each service can be ensured by dynamically allocating servers when they become necessary, without keeping a sufficient number of spare servers at each data center. Further, even a small data center can guarantee service quality at times of sudden load concentration by cooperating with other data centers. Furthermore, sharing the spare servers reduces capital investment while at the same time making effective use of the equipment.

Claims

1. A load distribution method for an apparatus comprising a plurality of servers that provide services to a client via a network, the method comprising:
a step of providing a plurality of spare servers to which no service is set in an initial state, for sharing the load of the servers that provide normal service; and
a control step of, in anticipation of an increase in the load of a server that provides normal service, setting an application for the service to be provided on a spare server, making the spare server a providing server of the service, and causing it to share the load with the servers that provide normal service.
2. The method according to claim 1, wherein a plurality of the apparatuses are connected via a network, and when one apparatus can no longer support the load of a service, another apparatus provides the one apparatus with a server to be used for the required service.
3. The method according to claim 2, wherein the other apparatus has spare servers, and the spare servers are provided when the servers provided to the one apparatus can no longer fully support the load.
4. The method according to claim 2, wherein, when the load is shared among the plurality of apparatuses, a communication band is secured between the plurality of apparatuses.
5. The method according to claim 1, wherein, in the control step, the magnitude of the load after a predetermined time is predicted from the past number of requests processed by the servers, and it is thereby determined whether a spare server is to be used for providing the service.
6. The method according to claim 1, wherein, when a spare server is used for a particular service, a spare server suited to providing the particular service is selected based on the hardware characteristics of the spare servers.
7. The method according to claim 1, wherein, when a spare server is used for a particular service, spare servers each capable of supplying the processing capacity to be supplemented on its own are used with priority.
8. The method according to claim 7, wherein, among the spare servers each capable of supplying the capacity to be supplemented on its own, the spare server of minimum performance is used with priority.
9. The method according to claim 1, wherein, when a spare server is used for a particular service and no single spare server is capable of supplying the processing capacity to be supplemented, the spare server of maximum performance is used.
10. The method according to claim 1, wherein, in the control step, when the load becomes low enough to be supported without spare servers, the application for providing the service is removed from a spare server that has been used to provide the service whose load has decreased, and the use of that spare server for providing the service is stopped.
11. The method according to claim 10, wherein, when the use of a spare server is stopped, the hardware characteristics of the spare servers are taken into consideration in deciding which spare server to stop using.
12. The method according to claim 10, wherein, when the use of a spare server is stopped, the use of the spare server of maximum performance is stopped within a range in which the remaining servers and spare servers can continue to support the load of the particular service.
13. An apparatus comprising a plurality of servers that provide services to a client via a network, the apparatus comprising:
a plurality of spare servers to which no service is set in an initial state, for sharing the load of the servers that provide normal service; and
control means for, in anticipation of an increase in the load of a server that provides normal service, setting an application for the service to be provided on a spare server, making the spare server a providing server of the service, and causing it to share the load with the servers that provide normal service.
14. A program for causing a computer to implement a load distribution method for an apparatus comprising a plurality of servers that provide services to a client via a network, the method comprising:
a step of providing a plurality of spare servers to which no service is set in an initial state, for sharing the load of the servers that provide normal service; and
a control step of, in anticipation of an increase in the load of a server that provides normal service, setting an application for the service to be provided on a spare server, making the spare server a providing server of the service, and causing it to share the load with the servers that provide normal service.
PCT/JP2003/003273 2003-03-18 2003-03-18 Load distributing system by intersite cooperation WO2004084085A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2003/003273 WO2004084085A1 (en) 2003-03-18 2003-03-18 Load distributing system by intersite cooperation

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2004569568A JPWO2004084085A1 (en) 2003-03-18 2003-03-18 Load distribution system by inter-site cooperation
PCT/JP2003/003273 WO2004084085A1 (en) 2003-03-18 2003-03-18 Load distributing system by intersite cooperation
US11/050,058 US20050144280A1 (en) 2003-03-18 2005-02-04 Load distribution system by inter-site cooperation

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/050,058 Continuation US20050144280A1 (en) 2003-03-18 2005-02-04 Load distribution system by inter-site cooperation

Publications (1)

Publication Number Publication Date
WO2004084085A1 true WO2004084085A1 (en) 2004-09-30

Family

ID=33018146

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2003/003273 WO2004084085A1 (en) 2003-03-18 2003-03-18 Load distributing system by intersite cooperation

Country Status (2)

Country Link
JP (1) JPWO2004084085A1 (en)
WO (1) WO2004084085A1 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09212468A (en) * 1996-02-02 1997-08-15 Fujitsu Ltd Compound mode multiprocessing system
JP2000298637A (en) * 1999-04-15 2000-10-24 Nec Software Kyushu Ltd System and method for load distribution and recording medium
EP1063831A2 (en) * 1999-06-24 2000-12-27 Canon Kabushiki Kaisha Network status server, information distribution system, control method, and storage medium for storing control program
JP2002163241A (en) * 2000-11-29 2002-06-07 Ntt Data Corp Client server system
JP2002259354A (en) * 2001-03-01 2002-09-13 Hitachi Ltd Network system and load distributing method


Cited By (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4558740B2 (en) * 2004-10-20 2010-10-06 富士通株式会社 Application management program, application management method, and the application management device
JPWO2006043321A1 (en) * 2004-10-20 2008-05-22 富士通株式会社 Application management program, application management method, and the application management device
JPWO2006043322A1 (en) * 2004-10-20 2008-05-22 富士通株式会社 Server manager, server management method, and a server management apparatus
JP2006259793A (en) * 2005-03-15 2006-09-28 Hitachi Ltd Shared resource management method, and its implementation information processing system
JP4650203B2 (en) * 2005-10-20 2011-03-16 株式会社日立製作所 Information systems and management computer
US8769545B2 (en) 2005-10-20 2014-07-01 Hitachi, Ltd. Server pool management method
JP2007114983A (en) * 2005-10-20 2007-05-10 Hitachi Ltd Server pool management method
US7693995B2 (en) 2005-11-09 2010-04-06 Hitachi, Ltd. Arbitration apparatus for allocating computer resource and arbitration method therefor
USRE47296E1 (en) 2006-02-21 2019-03-12 A10 Networks, Inc. System and method for an adaptive TCP SYN cookie with time validation
JP2007264794A (en) * 2006-03-27 2007-10-11 Fujitsu Ltd Parallel distributed processing program and system
WO2008007435A1 (en) * 2006-07-13 2008-01-17 Fujitsu Limited Resource management program, resource management method, and resource management device
US9270705B1 (en) 2006-10-17 2016-02-23 A10 Networks, Inc. Applying security policy to an application session
US9253152B1 (en) 2006-10-17 2016-02-02 A10 Networks, Inc. Applying a packet routing policy to an application session
US9497201B2 (en) 2006-10-17 2016-11-15 A10 Networks, Inc. Applying security policy to an application session
JP2010272090A (en) * 2009-05-25 2010-12-02 Hitachi Ltd Device, program and method for managing processing request destination
US9960967B2 (en) 2009-10-21 2018-05-01 A10 Networks, Inc. Determining an application delivery server based on geo-location information
JP2011138202A (en) * 2009-12-25 2011-07-14 Fujitsu Ltd Server device, server load distribution device, server load distribution method, and program
US9961135B2 (en) 2010-09-30 2018-05-01 A10 Networks, Inc. System and method to balance servers based on server load status
JP2014502382A (en) * 2010-09-30 2014-01-30 エイ10 ネットワークス インコーポレイテッドA10 Networks, Inc. System and method to balance the server based on server load state
US9961136B2 (en) 2010-12-02 2018-05-01 A10 Networks, Inc. Distributing application traffic to servers based on dynamic service response time
US10178165B2 (en) 2010-12-02 2019-01-08 A10 Networks, Inc. Distributing application traffic to servers based on dynamic service response time
US9609052B2 (en) 2010-12-02 2017-03-28 A10 Networks, Inc. Distributing application traffic to servers based on dynamic service response time
US9906591B2 (en) 2011-10-24 2018-02-27 A10 Networks, Inc. Combining stateless and stateful server load balancing
US9270774B2 (en) 2011-10-24 2016-02-23 A10 Networks, Inc. Combining stateless and stateful server load balancing
US9386088B2 (en) 2011-11-29 2016-07-05 A10 Networks, Inc. Accelerating service processing using fast path TCP
US9979801B2 (en) 2011-12-23 2018-05-22 A10 Networks, Inc. Methods to manage services over a service gateway
US10044582B2 (en) 2012-01-28 2018-08-07 A10 Networks, Inc. Generating secure name records
US9602442B2 (en) 2012-07-05 2017-03-21 A10 Networks, Inc. Allocating buffer for TCP proxy session based on dynamic network conditions
US9705800B2 (en) 2012-09-25 2017-07-11 A10 Networks, Inc. Load distribution in data networks
US10021174B2 (en) 2012-09-25 2018-07-10 A10 Networks, Inc. Distributing service sessions
US10002141B2 (en) 2012-09-25 2018-06-19 A10 Networks, Inc. Distributed database in software driven networks
US9843484B2 (en) 2012-09-25 2017-12-12 A10 Networks, Inc. Graceful scaling in software driven networks
US9338225B2 (en) 2012-12-06 2016-05-10 A10 Networks, Inc. Forwarding policies on a virtual service network
US9544364B2 (en) 2012-12-06 2017-01-10 A10 Networks, Inc. Forwarding policies on a virtual service network
US9531846B2 (en) 2013-01-23 2016-12-27 A10 Networks, Inc. Reducing buffer usage for TCP proxy session based on delayed acknowledgement
US9900252B2 (en) 2013-03-08 2018-02-20 A10 Networks, Inc. Application delivery controller and global server load balancer
US9992107B2 (en) 2013-03-15 2018-06-05 A10 Networks, Inc. Processing data packets using a policy based network path
US10027761B2 (en) 2013-05-03 2018-07-17 A10 Networks, Inc. Facilitating a secure 3 party network session by a network device
US10038693B2 (en) 2013-05-03 2018-07-31 A10 Networks, Inc. Facilitating secure network traffic by an application delivery controller
US10230770B2 (en) 2013-12-02 2019-03-12 A10 Networks, Inc. Network proxy layer for policy-based application proxies
US9942152B2 (en) 2014-03-25 2018-04-10 A10 Networks, Inc. Forwarding data packets using a service-based forwarding policy
US9942162B2 (en) 2014-03-31 2018-04-10 A10 Networks, Inc. Active application response delay time
US10257101B2 (en) 2014-03-31 2019-04-09 A10 Networks, Inc. Active application response delay time
US9906422B2 (en) 2014-05-16 2018-02-27 A10 Networks, Inc. Distributed system to determine a server's health
US9992229B2 (en) 2014-06-03 2018-06-05 A10 Networks, Inc. Programming a data network device using user defined scripts with licenses
US10129122B2 (en) 2014-06-03 2018-11-13 A10 Networks, Inc. User defined objects for network devices
US9986061B2 (en) 2014-06-03 2018-05-29 A10 Networks, Inc. Programming a data network device using user defined scripts
JP2016045505A (en) * 2014-08-19 2016-04-04 日本電信電話株式会社 Service providing system and service providing method
US10243791B2 (en) 2015-08-13 2019-03-26 A10 Networks, Inc. Automated adjustment of subscriber policies
WO2018097058A1 (en) * 2016-11-22 2018-05-31 日本電気株式会社 Analysis node, method for managing resources, and program recording medium

Also Published As

Publication number Publication date
JPWO2004084085A1 (en) 2006-06-22

Similar Documents

Publication Publication Date Title
CA2372092C (en) A queuing model for a plurality of servers
US7720551B2 (en) Coordinating service performance and application placement management
EP0952741B1 (en) Method for resource allocation and routing in multi-service virtual private networks
US9218153B2 (en) Servicing a print request from a client system
JP4961833B2 (en) Cluster system, load balancing method, optimization client program, and arbitration server program
US7464160B2 (en) Provisioning grid services to maintain service level agreements
KR101096000B1 (en) Method For Managing Resources In A Platform For Telecommunication Service And/Or Network Management, Corresponding Platform And Computer Program Product Therefor
US7756989B2 (en) Method and apparatus for dynamically adjusting resources assigned to plurality of customers, for meeting service level agreements (SLAs) with minimal resources, and allowing common pools of resources to be used across plural customers on a demand basis
US6771595B1 (en) Apparatus and method for dynamic resource allocation in a network environment
US8799895B2 (en) Virtualization-based resource management apparatus and method and computing system for virtualization-based resource management
US5889956A (en) Hierarchical resource management with maximum allowable allocation boundaries
US20050102318A1 (en) Load simulation tool for server resource capacity planning
US7400632B2 (en) Adaptive bandwidth throttling for network services
JP2851432B2 (en) Non-hierarchical traffic routing method in a communication network
CA2192581C (en) Method and system for management of frequency spectrum among multiple applications on a shared medium
JP3879471B2 (en) Computer resource allocation method
US8346909B2 (en) Method for supporting transaction and parallel application workloads across multiple domains based on service level agreements
EP0782072B1 (en) File server load distribution system and method
US20050039183A1 (en) System and method for allocating a plurality of resources between a plurality of computing domains
KR100985619B1 (en) Apparatus, system, and method for on-demand control of grid system resources
JP3844932B2 (en) A recording medium recording a program for determining whether to be assigned to the server pool for the work type of server in the work processing facility, and the system
JP5557590B2 (en) Load balancer and systems
US8656404B2 (en) Statistical packing of resource requirements in data centers
JP2566728B2 (en) Logical path scheduling apparatus and running
US7712102B2 (en) System and method for dynamically configuring a plurality of load balancers in response to the analyzed performance data

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP US

WWE Wipo information: entry into national phase

Ref document number: 2004569568

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 11050058

Country of ref document: US