WO2004084085A1 - Systeme de distribution de charge par cooperation intersite - Google Patents

Systeme de distribution de charge par cooperation intersite Download PDF

Info

Publication number
WO2004084085A1
Authority
WO
WIPO (PCT)
Prior art keywords
server
service
load
spare
servers
Prior art date
Application number
PCT/JP2003/003273
Other languages
English (en)
Japanese (ja)
Inventor
Tsutomu Kawai
Satoshi Tutiya
Yasuhiro Kokusho
Original Assignee
Fujitsu Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Limited filed Critical Fujitsu Limited
Priority to PCT/JP2003/003273 priority Critical patent/WO2004084085A1/fr
Priority to JP2004569568A priority patent/JPWO2004084085A1/ja
Publication of WO2004084085A1 publication Critical patent/WO2004084085A1/fr
Priority to US11/050,058 priority patent/US20050144280A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers

Definitions

  • the present invention relates to a load distribution system using inter-site cooperation.
  • Figure 1 shows an example of a conventional load distribution system.
  • the client 10 accesses the data center 12 via the network 11 and receives a service.
  • a plurality of servers 14 are connected to the load balancer 13.
  • When a single server cannot handle the processing, multiple servers are installed as shown in Fig. 1 and a load balancer 13 is placed in front of them to distribute the load among the servers and improve service quality.
  • However, the work of deciding to add a server 14, registering it with the load balancer 13, and changing the settings is in many cases performed manually, and servers corresponding to the maximum load must be secured at all times, resulting in a large cost.
  • Patent Document 1 describes a method of adding a server and distributing requests from users. However, a mechanism for server selection must be incorporated on the user side, so the method is not suitable for services used by an unspecified number of users. In addition, there is the problem that management information other than the request must be exchanged.
  • Patent Document 2 can be applied only to the case where static information is distributed, and cannot be applied to cases where different information is returned for each request from a user, as in service provision.
  • Patent Document 3 also assumes the case of static information, and does not consider the case where the load on the file server or the like becomes excessive.
  • Patent Document 1
  • Patent Document 2
  • An object of the present invention is to provide a load distribution system capable of distributing a load for providing a service and flexibly responding to a change in a request from a user.
  • The method of the present invention is a method of distributing the load of an apparatus having a plurality of servers for providing a service to a client via a network.
  • It is characterized by having a control step in which, when the load on the server providing the service increases, settings for the service to be provided are applied to a spare server, and the spare server is made to share the load with the server that normally provides the service.
  • That is, a plurality of spare servers are provided in addition to the servers that normally provide the service, and when the load on those servers increases, a spare server is set up so that it can provide the service and share the load of the servers providing the service.
  • Further, devices equipped with spare servers are connected via a network and controlled so that they provide spare servers to each other. Even if a single device does not have enough processing power to support the service, multiple devices can cope with the load via the network, thereby avoiding interruption of service provision due to a large load. In addition, this reduces the number of spare servers each device must hold, eliminating the need for redundant hardware in each device.
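  • As a rough illustration of this cooperative use of spare servers, the following Python sketch shows a center first drawing on its own spare pool and then asking a cooperating center over the network when its pool is exhausted; all class and method names (Center, Service, add_capacity, and so on) are illustrative assumptions, not terms from the present invention.

```python
# Minimal sketch of the cooperative spare-server idea described above.
# Names and data layout are assumptions for illustration only.

class Service:
    def __init__(self, name, servers):
        self.name = name
        self.servers = list(servers)

class Center:
    def __init__(self, name, spare_pool):
        self.name = name
        self.spare_pool = list(spare_pool)   # spare servers not yet assigned
        self.peers = []                      # cooperating centers reachable via the network

    def assign_spare(self, service):
        """Configure one local spare server for `service`, if any is left."""
        if self.spare_pool:
            server = self.spare_pool.pop()
            service.servers.append(server)   # install the application, join the service
            return True
        return False

    def add_capacity(self, service):
        """Try the local spare pool first, then ask cooperating centers."""
        if self.assign_spare(service):
            return f"{self.name}: added a local spare to {service.name}"
        for peer in self.peers:              # inter-site cooperation
            if peer.assign_spare(service):
                return f"{peer.name}: lent a spare to {service.name}"
        return "warning: no spare capacity available in any center"

center1 = Center("center1", spare_pool=[])          # its own spares are exhausted
center2 = Center("center2", spare_pool=["spare-a"])
center1.peers.append(center2)
service1 = Service("service1", servers=["web1"])
print(center1.add_capacity(service1))               # center2 lends 'spare-a' to service1
```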
  • Figure 1 shows an example of a conventional load distribution system.
  • FIG. 2 is a diagram showing a basic configuration of the embodiment of the present invention.
  • FIG. 3 is a diagram showing a network arrangement configuration in a center in the basic configuration of FIG.
  • FIG. 4 is a diagram showing a first embodiment of the present invention.
  • FIG. 5 is a diagram illustrating the operation of the first exemplary embodiment of the present invention.
  • FIG. 6 is a diagram showing data for calculating the load and capacity of the server.
  • Figure 7 is a diagram showing data for selecting a server according to the size of the load.
  • FIG. 8 is a diagram showing the relationship between the capacity of the server to be added and the predicted value of the load.
  • FIG. 9 is a diagram showing a configuration in which a spare server is shared by a plurality of services.
  • FIG. 10 is a diagram showing a configuration in a case where a spare server is provided between different centers.
  • FIG. 11 is a diagram illustrating the operation of the embodiment of the present invention.
  • FIG. 12 is a diagram for explaining how to secure a network band when cooperating with another center.
  • FIG. 13 is a diagram illustrating an application example of the embodiment of the present invention in a web server.
  • FIG. 14 is a diagram showing an application example of the embodiment of the present invention in a web service.
  • FIG. 15 is a diagram showing an application example of the embodiment of the present invention in a case where centers of equal standing mutually exchange resources.
  • FIG. 16 is a diagram showing an example in which the embodiment of the present invention is applied to a front-stage center having no spare server.
  • FIGS. 17 to 24 are flowcharts illustrating the operation of the embodiment of the present invention when there is no cooperation between databases provided in the center.
  • FIG. 25 to FIG. 30 are flowcharts showing the processing flow of the embodiment of the present invention when the database is linked.
  • FIG. 2 is a diagram showing a basic configuration of the embodiment of the present invention.
  • The client 10 accesses the Web server 15-1 via the network 11 and the load balancer 13-1 of the front-stage center 12-1.
  • The client 10 then accesses the database server 14-1 or the file server 14-2 to receive the service.
  • The rear-stage center 12-2 has almost the same configuration as the front-stage center 12-1; it receives the request from the client 10 via the load balancer 13-1, and its load balancer 13-2 guides the client 10 to the Web server 15-2 while distributing the load.
  • The client 10 accesses the database server 14-3 or 14-4 via the Web server 15-2 to receive the service.
  • The front-stage center 12-1 indicates a center that directly receives a user's request.
  • The rear-stage center 12-2 indicates a center that processes the user's request through the front-stage center 12-1.
  • The assignment of servers between data centers can be a many-to-many relationship, for example when one data center uses servers from multiple data centers, or when one data center responds to server requests from multiple data centers simultaneously.
  • The server load status and client load status are tallied and judged by the system controllers 16-1 and 16-2, which apply the results to the servers 14-1 to 14-4 and to the load balancers 13-1 and 13-2. If server resources are insufficient, servers from the spare servers 17-1 and 17-2 are set up as servers with the necessary functions and added to the service to improve its performance.
  • FIG. 3 is a diagram showing a network arrangement configuration in a center in the basic configuration of FIG.
  • The physical network configuration connects all the servers directly under a single switch group 20, while forming logically independent networks (VLAN0, VLAN11, VLAN12, VLAN21). With such an arrangement, the process of adding servers at the required locations can be automated.
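  • The following sketch illustrates, under assumed names (SwitchGroup and the VLAN identifiers are assumptions), why this arrangement allows server addition to be automated: since every server is cabled to the same switch group, adding a server to a service amounts to reassigning its port to that service's VLAN.

```python
# Illustrative sketch only: one switch group, logical VLANs per service,
# and "adding" a server is just a port-to-VLAN reassignment (no recabling).

class SwitchGroup:
    def __init__(self):
        self.port_vlan = {}                     # server name -> VLAN id

    def assign(self, server, vlan):
        self.port_vlan[server] = vlan           # reconfigure the port only

    def members(self, vlan):
        return [s for s, v in self.port_vlan.items() if v == vlan]

switch = SwitchGroup()
switch.assign("web1", "VLAN11")                 # service 1 segment
switch.assign("db1", "VLAN12")
switch.assign("spare1", "VLAN0")                # idle spare-pool segment

# Automating a server addition: move the spare into service 1's segment.
switch.assign("spare1", "VLAN11")
print(switch.members("VLAN11"))                 # ['web1', 'spare1']
```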
  • The server capacity is derived from server specifications such as CPU performance and network configuration, so the necessary servers can be calculated and assigned appropriately even in an environment where various types of hardware are mixed. At the same time, the traffic to those servers is calculated and the network bandwidth is secured or arbitrated.
  • servers will be added before an overload occurs, and service quality will be guaranteed.
  • FIG. 4 is a diagram showing a first embodiment of the present invention.
  • The system control unit 16 measures the load status of the servers, and if it judges that the current number of servers will cause a problem, a server is added from the spare servers 17, and the applications, services, and data to be used are set up and installed on it. Then, the settings of the related devices and servers are updated, and the new server is incorporated into the service.
  • FIG. 5 is a diagram illustrating the operation of the first exemplary embodiment of the present invention.
  • FIG. 6 is a diagram showing data for calculating the load and capacity of the server.
  • Information is required on how much service capability a given server provides.
  • The service capacity per unit changes depending on the combination of servers and devices used and on the applications and services. Since it is practically impossible to use uniform servers when multiple data centers cooperate, the service capability must be calculated from equipment specifications such as CPU and memory. Therefore, a method is used that estimates the performance value from the performance measured in a typical configuration, taking into account differences in CPU capability and the like.
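  • A minimal sketch of such an estimation follows; the reference configuration, the numbers, and the memory penalty are illustrative assumptions, not values from the present invention.

```python
# Scale a measured performance value from a reference configuration by the
# ratio of CPU capability, with a crude penalty when memory is short.

REFERENCE = {"cpu_mhz": 1000, "requests_per_sec": 200}   # assumed typical configuration

def estimated_capacity(cpu_mhz, mem_gb, min_mem_gb=1.0):
    """Estimate the requests/sec a server can handle from its specifications."""
    scale = cpu_mhz / REFERENCE["cpu_mhz"]
    if mem_gb < min_mem_gb:                   # penalise servers with too little memory
        scale *= mem_gb / min_mem_gb
    return REFERENCE["requests_per_sec"] * scale

print(estimated_capacity(cpu_mhz=2400, mem_gb=4))   # ~480 requests/sec
```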
  • Figure 7 is a diagram showing data for selecting a server according to the size of the load.
  • the server with the higher recommendation is preferentially selected and used until the required amount is satisfied.
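  • For illustration, a greedy selection following this rule might look as follows, assuming each spare server is described by an illustrative (name, capacity, recommendation) tuple.

```python
# Take servers in descending recommendation until the required capacity is covered.

def select_by_recommendation(spares, required):
    """spares: list of (name, capacity, recommendation); returns chosen names."""
    chosen, total = [], 0.0
    for name, capacity, _rank in sorted(spares, key=lambda s: s[2], reverse=True):
        if total >= required:
            break
        chosen.append(name)
        total += capacity
    return chosen

spares = [("s1", 100, 3), ("s2", 250, 5), ("s3", 150, 1)]
print(select_by_recommendation(spares, required=300))   # ['s2', 's1']
```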
  • FIG. 8 is a diagram showing the relationship between the capacity of the server to be added and the predicted value of the load.
  • FIG. 9 is a diagram showing a configuration in which a spare server is shared by a plurality of services.
  • The center is provided with service 1 and service 2, and load balancers 13-1 and 13-2 are provided for them, respectively.
  • Service 1 has a Web server 15-1, a database server 14-1, and a file server 14-2.
  • the service 2 is provided with a server 25.
  • the spare server 17 is provided in common for service 1 and service 2.
  • The system controller 16 checks the load status and adds servers from the spare servers 17 to service 1 or service 2.
  • FIG. 10 is a diagram showing a configuration in a case where a spare server is provided between different centers.
  • The center 12-2 is a rear-stage center, and its spare server 17-2 is used via the network.
  • FIG. 11 is a diagram illustrating the operation of the embodiment of the present invention.
  • Some services require servers that cooperate with each other, such as databases, in addition to servers that directly exchange information with users.
  • In such a service, performance cannot be improved unless the processing capacity and load status are checked for each function and a server is added to the appropriate function. For this reason, the system controller 16 checks the load for each layer and, when adding or removing servers, changes the settings of the linked servers to increase or decrease the capacity.
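  • As a simple illustration of this per-function check, the following sketch identifies the tier whose predicted load most exceeds its capacity, so that a server is added to that tier rather than elsewhere; the tier names, the utilization threshold, and the data layout are assumptions.

```python
def bottleneck_tier(tiers):
    """tiers: {name: {"load": predicted, "capacity": allocated}}; returns the worst tier or None."""
    worst, worst_ratio = None, 1.0              # only tiers above 100% utilization qualify
    for name, t in tiers.items():
        ratio = t["load"] / t["capacity"]
        if ratio > worst_ratio:
            worst, worst_ratio = name, ratio
    return worst

tiers = {"web": {"load": 80, "capacity": 100},
         "database": {"load": 130, "capacity": 100}}
print(bottleneck_tier(tiers))                   # 'database' -> add a server to that tier
```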
  • FIG. 12 is a diagram for explaining how to secure a network band when cooperating with another center.
  • In this way, the load from users and the status of server capacity are monitored, and necessary and sufficient resources can be allocated from the data center itself or from linked data centers before the load exceeds the server capacity, so service quality can be guaranteed for requests from users. Since the required spare servers can be shared over a wide area, the total number of servers required can be reduced as a whole.
  • Even in a service in which servers with multiple functions cooperate, a server can be added to the function that is the bottleneck, so a sufficiently large scale can be achieved.
  • Since the entire process can be automated, the system can quickly follow changes in the volume of user requests.
  • FIG. 13 is a diagram illustrating an application example of the embodiment of the present invention in a web server.
  • the same components as those in FIG. 12 are denoted by the same reference numerals, and description thereof will be omitted.
  • FIG. 14 is a diagram showing an application example of the embodiment of the present invention in a web service.
  • The Web service is composed of a combination of a Web server 15-1, a database server 14-1, and a file server 14-2.
  • The database server 14-1 synchronizes data even during cooperation between the front-stage center 12-1 and the rear-stage center 12-2. This is realized by creating a VLAN across the centers and securing the bandwidth.
  • FIG. 15 shows an application example of the embodiment of the present invention when equal centers mutually exchange resources.
  • When the processing capacity of service 1 in center 1 becomes insufficient even with spare server 30-1 in center 1,
  • cooperation is requested from center 2, and the servers in center 2 (the shaded portion and spare server 30-2) are used.
  • When the server capacity in center 2 is also exhausted (when the capacity including spare server 30-2 is used up),
  • yet another center 3 is requested to cooperate, and the servers in center 3 (the shaded portion and spare server 30-3) are used.
  • FIG. 16 is a diagram showing an example in which the embodiment of the present invention is applied to a front-stage center having no spare server.
  • When the system control unit 16-1 determines that there are not enough servers for providing the service in the front-stage center 12-1, the rear-stage center 12-2 is requested to cooperate, and servers in the rear-stage center 12-2 are used.
  • a load balancer and a Web server are provided for service 1 and service 2.
  • the service 1 and service 2 servers provide service 1 and service 2, respectively.
  • the spare server 17 is added as needed for each service.
  • The system control unit 16-2 determines the addition and cooperates with the front-stage center 12-1.
  • FIGS. 17 to 24 are flowcharts for explaining the operation of the embodiment of the present invention in the case where the databases provided in the center are not linked.
  • FIG. 17 is a flowchart showing the overall flow of the system control device.
  • In step S10, load measurement is performed.
  • In step S11, it is determined whether the predicted processing capacity exceeds the allocated processing capacity. If the determination in step S11 is YES, the processing capacity is added in step S12, and the process proceeds to step S15.
  • In step S15, the process waits for 10 seconds. However, this value should be set appropriately by the designer.
  • In step S13, it is determined whether the current processing capacity is less than half of the allocated processing capacity. If the determination in step S13 is YES, the processing capacity is reduced in step S14, and the process proceeds to step S15. If the determination in step S13 is NO, the process proceeds directly to step S15.
  • After step S15, the process returns to step S10.
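  • The overall loop of FIG. 17 can be summarized by the following sketch, assuming hypothetical measure_load, add_capacity, and reduce_capacity helpers; the 10-second wait and the less-than-half reduction rule follow the steps described above.

```python
import time

def control_loop(measure_load, add_capacity, reduce_capacity):
    """Runs forever, adjusting capacity every 10 seconds (FIG. 17, steps S10 to S15)."""
    while True:
        current, predicted, allocated = measure_load()   # step S10
        if predicted > allocated:                        # step S11
            add_capacity(predicted - current)            # step S12
        elif current < allocated / 2:                    # step S13
            reduce_capacity(allocated - current)         # step S14
        time.sleep(10)                                   # step S15 (designer-chosen value)
```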
  • FIG. 18 is a diagram showing the details of the load measurement in step S10 of FIG. 17.
  • In step S20, the average number of processes over 10 seconds is collected from the servers in use. This 10 seconds should match the wait value of step S15 in FIG. 17.
  • In step S21, the total average number of processes is calculated and added to the measurement history.
  • In step S22, it is determined whether there are four or more measurement histories. If the determination in step S22 is NO, the latest history value is set as the predicted value 30 seconds ahead in step S23, and the process proceeds to step S25. If the determination in step S22 is YES, a predicted value 30 seconds ahead is calculated from the last four histories by least-squares approximation in step S24, and the process proceeds to step S25.
  • In step S25, the predicted value for 30 seconds ahead is set.
  • In step S26, the latest history value is set as the current value, and the process returns to the flow in FIG. 17.
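  • The measurement and prediction of FIG. 18 amount to a short least-squares extrapolation; the following sketch (the helper name predict_30s and the data layout are assumptions) fits a line to the last four samples taken at 10-second intervals and reads off the value 30 seconds ahead.

```python
def predict_30s(history, interval=10.0, horizon=30.0, window=4):
    """history: list of load samples taken every `interval` seconds."""
    if len(history) < window:                      # step S22 NO -> use latest value
        return history[-1]
    ys = history[-window:]                         # last four samples (step S24)
    xs = [i * interval for i in range(window)]     # 0, 10, 20, 30 seconds
    n = float(window)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept + slope * (xs[-1] + horizon)  # value 30 s after the last sample

print(predict_30s([100, 120, 140, 160]))           # linear trend -> 220.0
```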
  • FIG. 19 is a diagram showing the details of the processing capacity adding process in step S12 of FIG. 17.
  • In step S30, the current processing capacity is subtracted from the predicted value to determine the additional processing capacity.
  • In step S31, it is determined whether there is a spare server in the center. If the determination in step S31 is YES, an additional server in the center is selected in step S32.
  • In step S33, it is determined whether the additional processing capacity has been satisfied. If the determination in step S33 is NO, the flow proceeds to step S34; if it is YES, the flow proceeds to step S38. If the determination in step S31 is NO, the process also proceeds to step S34. In step S34, it is determined whether there is a partner center having spare processing capacity. If the determination in step S34 is YES, processing capacity is allocated by the cooperation center in step S36.
  • In step S37, it is determined whether the additional processing capacity has been satisfied. If the determination in step S37 is NO, the process returns to step S34. If the determination in step S37 is YES, the process proceeds to step S38. If the determination in step S34 is NO, the administrator is warned in step S35 that the additional processing capacity cannot be satisfied, and the process proceeds to step S38. In step S38, a VLAN is set so as to include the selected servers. In step S39, the application is set up on the selected servers, and the process proceeds to step S40.
  • In step S40, it is determined whether there is cooperation between centers. If the determination is NO, the process proceeds to step S43. If the determination in step S40 is YES, the load distribution ratio of the cooperation center is determined and set in the allocation device in step S41, the communication band between the own center and the cooperation center is secured in step S42, and the process proceeds to step S43. In step S43, the load distribution ratio of the own center is determined and set in the allocation device, and the process returns to the flow of FIG. 17.
  • FIG. 20 is a flow showing in detail the process of selecting an additional server in step S32 of FIG. 19.
  • In step S50, it is determined whether there is a server for the required use. If the determination in step S50 is NO, the process proceeds to step S54. If the determination in step S50 is YES, it is determined in step S51 whether there is a server for the required use that can satisfy the additional processing capacity by itself. If the determination in step S51 is NO, the server for the required use with the highest performance is selected in step S52, and the process returns to step S50. If the determination in step S51 is YES, the server with the lowest performance is selected from among the servers for the required use that can provide the additional processing capacity on their own, and the process proceeds to step S58.
  • In step S54, it is determined whether there is an available server. If the determination in step S54 is NO, the process proceeds to step S58. If the determination in step S54 is YES, it is determined in step S55 whether a single server can satisfy the additional processing capacity. If the determination in step S55 is NO, the server with the highest performance is selected in step S56, and the process returns to step S54. If the determination in step S55 is YES, the server with the lowest performance is selected in step S57 from among the servers that can satisfy the additional processing capacity on their own, and the process proceeds to step S58. In step S58, a list of the assigned servers is constructed, and the process returns to the process in FIG. 19.
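  • The selection rule of FIG. 20 can be sketched as the following greedy procedure, assuming an illustrative (name, capacity) representation of the candidate servers: if a single server can cover the remaining need, take the weakest such server; otherwise take the strongest and repeat, first among servers already set up for the required use and then among the remaining available servers.

```python
def pick_servers(candidates, needed):
    """candidates: list of (name, capacity); returns (chosen names, remaining need)."""
    pool = sorted(candidates, key=lambda s: s[1])        # ascending performance
    chosen = []
    while needed > 0 and pool:
        sufficient = [s for s in pool if s[1] >= needed]
        server = sufficient[0] if sufficient else pool[-1]  # weakest sufficient, else strongest
        pool.remove(server)
        chosen.append(server[0])
        needed -= server[1]
    return chosen, max(needed, 0)

def select_additional(required_use, general_use, needed):
    chosen, left = pick_servers(required_use, needed)    # steps S50 to S53
    more, left = pick_servers(general_use, left)         # steps S54 to S57
    return chosen + more, left                           # step S58: assigned-server list

servers, shortfall = select_additional(
    required_use=[("w1", 80), ("w2", 200)], general_use=[("g1", 150)], needed=250)
print(servers, shortfall)                                # ['w2', 'w1'] 0
```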
  • FIG. 21 is a flowchart showing the flow of the cooperation center processing capacity assignment processing in step S36 of FIG. 19.
  • In step S60, it is determined whether the processing capacity upper limit based on the bandwidth is smaller than the desired allocation value. If the determination in step S60 is NO, the process proceeds to step S62. If the determination in step S60 is YES, the allocation upper limit is set to the bandwidth-based upper limit in step S61, and the process proceeds to step S62.
  • In step S62, a request is made to the partner center to select servers.
  • In step S63, additional servers are selected in the partner center, and in step S64, a list of the assigned servers is constructed. Then, the process returns to the process in FIG. 19.
  • FIG. 22 is a detailed flow of the application setting in step S39 of FIG. 19.
  • In step S70, it is determined whether there is cooperation between centers. If the determination in step S70 is NO, the process proceeds to step S74.
  • If the determination in step S70 is YES, it is determined in step S71 whether the application archive has already been transferred. If the determination in step S71 is YES, the process proceeds to step S73. If the determination in step S71 is NO, the application archive is transferred to the partner center in step S72, and the process proceeds to step S73. In step S73, the application is installed on the additional servers in the partner center, and the process proceeds to step S74. In step S74, the application is installed on the additional servers in the own center, and the process returns to the process in FIG. 19.
  • FIG. 23 is a flowchart showing the processing for reducing the processing capacity in step S14 of FIG. 17.
  • In step S80, the current measured value is subtracted from the allocated value to determine the processing capacity to be reduced.
  • In step S81, it is determined whether there is a cooperation center. If the determination in step S81 is YES, the servers to be removed are determined in the cooperation center in step S82, and it is determined in step S83 whether all the servers in the cooperation center have been removed. If the determination in step S83 is YES, the process returns to step S81. If the determination in step S83 is NO, the process proceeds to step S85. If the determination in step S81 is NO, the servers to be removed are determined in the own center in step S84, and the process proceeds to step S85.
  • In step S85, the load distribution ratio of the own center is determined and set in the allocation device.
  • In step S86, the load distribution ratio of the cooperation center is determined and set in the allocation device.
  • In step S87, the process waits for the user request processing to complete.
  • In step S88, the application is deleted from the servers being removed, and in step S89, a VLAN is set so as to include only the remaining servers (the cooperation network communication path is set).
  • In step S90, it is determined whether the cooperation is to be released. If the determination in step S90 is YES, the bandwidth between the own center and the cooperation center is released in step S91, and the process returns to the process in FIG. 17. If the determination in step S90 is NO, the process returns to the process in FIG. 17.
  • FIG. 24 is a flowchart showing the selection processing of the reduction server in step S82 or step S84 of FIG. 23.
  • In step S100, it is determined whether there is a server that can be used for another purpose. If the determination in step S100 is NO, the process proceeds to step S103. If the determination in step S100 is YES, it is determined in step S101 whether there is a server whose performance is lower than the remaining reduction amount. If the determination in step S101 is NO, the process proceeds to step S103. If the determination in step S101 is YES, in step S102, among the servers whose performance is lower than the remaining reduction amount, the server with the highest performance is removed, and the process returns to step S100.
  • In step S103, it is determined whether there is a server currently in use. If the determination in step S103 is NO, the process proceeds to step S106. If the determination in step S103 is YES, it is determined in step S104 whether there is a server whose performance is lower than the remaining reduction amount. If the determination in step S104 is NO, the process proceeds to step S106. If the determination in step S104 is YES, in step S105, among the servers whose performance is lower than the remaining reduction amount, the server with the highest performance is removed, and the process returns to step S103.
  • In step S106, a list of the removed servers is generated, and the process returns to the process in FIG. 23.
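  • The reduction rule of FIG. 24 can be sketched as follows, again with an assumed (name, capacity) layout: only servers whose performance does not exceed the remaining amount to shed are candidates, and the largest such server is removed first, considering servers reusable for other purposes before those currently in use.

```python
def pick_reductions(servers, to_shed):
    """servers: list of (name, capacity), mutated in place; returns (removed names, leftover)."""
    removed = []
    while True:
        fits = [s for s in servers if s[1] <= to_shed]  # removal would not over-shrink
        if not fits:
            break
        victim = max(fits, key=lambda s: s[1])          # largest server that still fits
        servers.remove(victim)
        removed.append(victim[0])
        to_shed -= victim[1]
    return removed, to_shed

reusable = [("r1", 60), ("r2", 120)]                    # usable for another purpose (S100-S102)
in_use = [("u1", 25), ("u2", 90)]                       # currently in use (S103-S105)
removed, left = pick_reductions(reusable, 150)
removed += pick_reductions(in_use, left)[0]
print(removed)                                          # ['r2', 'u1']
```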
  • FIG. 25 to FIG. 30 are flowcharts showing the processing flow of the embodiment of the present invention when there is cooperation of databases.
  • FIG. 25 is a flowchart showing the flow of the overall processing of the own center that performs the cooperation request.
  • In step S110, the load of the Web server is measured.
  • In step S111, it is determined whether the predicted processing capacity is larger than the allocated processing capacity. If the determination in step S111 is YES, Web processing capacity is added in step S112, and the process proceeds to step S115. If the determination in step S111 is NO, it is determined in step S113 whether the current processing capacity is smaller than one half of the allocated processing capacity. If the determination in step S113 is NO, the process proceeds to step S115. If the determination in step S113 is YES, the Web processing capacity is reduced in step S114, and the process proceeds to step S115.
  • In step S115, the load on the database in the center is measured.
  • In step S116, it is determined whether the predicted processing capacity is larger than the allocated processing capacity. If the determination in step S116 is YES, the database processing capacity is added in step S117, and the flow advances to step S120. If the determination in step S116 is NO, it is determined in step S118 whether the current processing capacity is smaller than one half of the allocated processing capacity. If the determination in step S118 is NO, the process proceeds to step S120. If the determination in step S118 is YES, the processing capacity of the database is reduced in step S119, and the flow advances to step S120. In step S120, the process waits for 10 seconds. This waiting time should be set appropriately by the designer. After step S120, the process returns to step S110.
  • Fig. 26 is a flow chart showing the overall processing flow of the partner center.
  • In step S130, the database load in the center is measured.
  • In step S131, it is determined whether the predicted processing capacity is larger than the allocated processing capacity. If the determination in step S131 is YES, database processing capacity is added in step S132, and the flow advances to step S135. If the determination in step S131 is NO, it is determined in step S133 whether the current processing capacity is smaller than one half of the allocated processing capacity. If the determination in step S133 is NO, the process proceeds to step S135. If the determination in step S133 is YES, the database processing capacity is reduced in step S134, and the flow advances to step S135. In step S135, the process waits for 10 seconds and returns to step S130. This 10-second wait is not limiting and should be set appropriately by the designer.
  • Figure 27 is a flowchart showing the detailed processing of web load measurement or database load measurement performed at each center.
  • In step S140, the average number of processes over 10 seconds is collected from the servers in use. This 10 seconds should be the same value as the waiting time of step S120 in FIG. 25 and step S135 in FIG. 26.
  • In step S141, the total average number of processes is calculated and added to the measurement history.
  • In step S142, it is determined whether there are four or more measurement histories. If the determination in step S142 is NO, the latest history value is set as the predicted value 30 seconds ahead in step S143, and the flow advances to step S145. If the determination in step S142 is YES, a predicted value 30 seconds ahead is derived from the latest four histories by least-squares approximation in step S144, and the process proceeds to step S145. This derivation method is as described for FIG. 18.
  • In step S145, the predicted value for 30 seconds ahead is set.
  • In step S146, the latest history value is set as the current value, and the process returns to the processing in FIG. 25 or FIG. 26.
  • FIG. 28 is a detailed flowchart of the Web processing capacity addition process in step S112 of FIG. 25.
  • When the addition is performed in a cooperation center, the processing from step S154 onward is performed.
  • In step S150, the currently allocated value is subtracted from the predicted value to determine the additional processing capacity.
  • In step S151, it is determined whether there is a spare server in the center. If the determination in step S151 is NO, the process proceeds to step S154. If the determination in step S151 is YES, an additional server in the center is selected in step S152. The details of this processing are as shown in FIG. 30. Then, in step S153, it is determined whether the additional processing capacity has been satisfied. If the determination in step S153 is NO, the process proceeds to step S154. If the determination in step S153 is YES, the process proceeds to step S158.
  • In step S154, it is determined whether there is a partner center having spare processing capacity. If the determination in step S154 is YES, processing capacity is allocated by the cooperation center in step S156; the details of this processing are as shown in FIG. 21. In step S157, it is determined whether the additional processing capacity has been satisfied. If the determination in step S157 is NO, the process returns to step S154. If the determination in step S157 is YES, the process proceeds to step S158. If the determination in step S154 is NO, the administrator is warned in step S155 that the additional processing capacity cannot be satisfied, and the process proceeds to step S158.
  • In step S158, a VLAN is set so as to include the selected servers.
  • In step S159, the application is set up on the selected servers.
  • The details of the application setting are as shown in FIG. 22.
  • In step S160, it is determined whether there is cooperation between centers. If the determination in step S160 is YES, the load distribution ratio of the cooperation center is determined and set in the equipment in step S161, the communication band between the own center and the cooperation center is secured in step S162, and the process proceeds to step S163.
  • If the determination in step S160 is NO, the process proceeds directly to step S163. In step S163, the load distribution ratio of the own center is determined and set in the device, and the process returns to the process in FIG. 25.
  • FIG. 29 is a detailed flow of the database processing capacity addition process in step S117 of FIG. 25 and step S132 of FIG. 26.
  • In step S170, the currently allocated value is subtracted from the predicted value to determine the additional processing capacity.
  • In step S171, it is determined whether there is a spare server in the center. If the determination in step S171 is NO, the Web capacity that can be supported by the current database is calculated in step S177, and the insufficient Web capacity is added through the cooperation center in step S178. The processing in step S178 is as shown in FIG. 28. Then, the processing returns to the processing of FIG. 25 or FIG. 26.
  • If the determination in step S171 is YES, an additional server in the center is selected in step S172. Then, in step S173, it is determined whether the additional processing capacity has been satisfied. If the determination in step S173 is NO, the process proceeds to step S177. If the determination in step S173 is YES, a VLAN is set so as to include the selected server in step S174, the database is set up on the selected server in step S175, the database list of the Web servers in the center is updated in step S176, and the processing returns to the processing in FIG. 25 or FIG. 26.
  • FIG. 30 is a flowchart showing details of the process of selecting an additional server common to the web server and the database.
  • In step S180, it is determined whether there is a server for the required use. If the determination in step S180 is YES, it is determined in step S181 whether there is a server for the required use that can satisfy the additional processing capacity by itself. If the determination in step S181 is NO, the server for the required use with the highest performance is selected in step S182, and the process returns to step S180. If the determination in step S181 is YES, the server with the lowest performance is selected in step S183 from among the servers that can satisfy the additional processing capacity on their own, and the process proceeds to step S188.
  • In step S184, it is determined whether there is an available server. If the determination in step S184 is YES, it is determined in step S185 whether there is a server that can satisfy the additional processing capacity on its own. If the determination in step S185 is NO, the available server with the highest performance is selected in step S186, and the flow returns to step S184. If the determination in step S185 is YES, the server with the lowest performance is selected in step S187 from among the servers that can satisfy the additional processing capacity on their own, and the process proceeds to step S188. If the determination in step S184 is NO, the process proceeds directly to step S188.
  • In step S188, a list of the assigned servers is constructed, and the process returns to the processing in FIG. 28 or FIG. 29.
  • As described above, service quality can be maintained by dynamically allocating servers when they become necessary, without securing and holding a sufficient number of spare servers for each service and each data center in advance.
  • In addition, capital investment can be reduced by sharing the spare servers, and at the same time the equipment can be used effectively.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)

Abstract

A system comprising a front-stage center (12-1) that directly receives a request from a client (10) via a network (11) and a rear-stage center (12-2) that receives the request from the client (10) through the front-stage center (12-1). Each center has spare servers (17-1, 17-2). The center (12-1) provides a service using its normal servers. On detecting an increase in the load on those servers, a system controller (16-1) provisions, from the spare servers (17-1) shared by a service 1 and a service 2, a server for the service whose load is increasing. If the load still cannot be handled, the system controller (16-1) issues an instruction to the system controller (16-2) of the rear-stage center (12-2) to take over part of the service. When the rear-stage center (12-2) cannot handle the load with its normal servers, it handles the load using its spare servers (17-2).
PCT/JP2003/003273 2003-03-18 2003-03-18 Systeme de distribution de charge par cooperation intersite WO2004084085A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/JP2003/003273 WO2004084085A1 (fr) 2003-03-18 2003-03-18 Systeme de distribution de charge par cooperation intersite
JP2004569568A JPWO2004084085A1 (ja) 2003-03-18 2003-03-18 サイト間連携による負荷分散システム
US11/050,058 US20050144280A1 (en) 2003-03-18 2005-02-04 Load distribution system by inter-site cooperation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2003/003273 WO2004084085A1 (fr) 2003-03-18 2003-03-18 Systeme de distribution de charge par cooperation intersite

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/050,058 Continuation US20050144280A1 (en) 2003-03-18 2005-02-04 Load distribution system by inter-site cooperation

Publications (1)

Publication Number Publication Date
WO2004084085A1 true WO2004084085A1 (fr) 2004-09-30

Family

ID=33018146

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2003/003273 WO2004084085A1 (fr) 2003-03-18 2003-03-18 Systeme de distribution de charge par cooperation intersite

Country Status (2)

Country Link
JP (1) JPWO2004084085A1 (fr)
WO (1) WO2004084085A1 (fr)

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006259793A (ja) * 2005-03-15 2006-09-28 Hitachi Ltd 共用リソース管理方法およびその実施情報処理システム
JP2007114983A (ja) * 2005-10-20 2007-05-10 Hitachi Ltd サーバプール管理方法
JP2007264794A (ja) * 2006-03-27 2007-10-11 Fujitsu Ltd 並列分散処理プログラム及び並列分散処理システム
WO2008007435A1 (fr) * 2006-07-13 2008-01-17 Fujitsu Limited Programme de gestion de ressources, procédé de gestion de ressources et dispositif de gestion de ressources
JPWO2006043322A1 (ja) * 2004-10-20 2008-05-22 富士通株式会社 サーバ管理プログラム、サーバ管理方法、およびサーバ管理装置
JPWO2006043321A1 (ja) * 2004-10-20 2008-05-22 富士通株式会社 アプリケーション管理プログラム、アプリケーション管理方法、およびアプリケーション管理装置
US7693995B2 (en) 2005-11-09 2010-04-06 Hitachi, Ltd. Arbitration apparatus for allocating computer resource and arbitration method therefor
JP2010272090A (ja) * 2009-05-25 2010-12-02 Hitachi Ltd 処理依頼先管理装置、処理依頼先管理プログラムおよび処理依頼先管理方法
JP2011138202A (ja) * 2009-12-25 2011-07-14 Fujitsu Ltd サーバ装置、サーバ負荷分散装置、サーバ負荷分散方法、及びプログラム
JP2014502382A (ja) * 2010-09-30 2014-01-30 エイ10 ネットワークス インコーポレイテッド サーバ負荷状態に基づきサーバをバランスさせるシステムと方法
US9253152B1 (en) 2006-10-17 2016-02-02 A10 Networks, Inc. Applying a packet routing policy to an application session
US9270774B2 (en) 2011-10-24 2016-02-23 A10 Networks, Inc. Combining stateless and stateful server load balancing
US9270705B1 (en) 2006-10-17 2016-02-23 A10 Networks, Inc. Applying security policy to an application session
JP2016045505A (ja) * 2014-08-19 2016-04-04 日本電信電話株式会社 サービス提供システム、及びサービス提供方法
US9338225B2 (en) 2012-12-06 2016-05-10 A10 Networks, Inc. Forwarding policies on a virtual service network
US9386088B2 (en) 2011-11-29 2016-07-05 A10 Networks, Inc. Accelerating service processing using fast path TCP
US9531846B2 (en) 2013-01-23 2016-12-27 A10 Networks, Inc. Reducing buffer usage for TCP proxy session based on delayed acknowledgement
US9602442B2 (en) 2012-07-05 2017-03-21 A10 Networks, Inc. Allocating buffer for TCP proxy session based on dynamic network conditions
US9609052B2 (en) 2010-12-02 2017-03-28 A10 Networks, Inc. Distributing application traffic to servers based on dynamic service response time
US9705800B2 (en) 2012-09-25 2017-07-11 A10 Networks, Inc. Load distribution in data networks
US9843484B2 (en) 2012-09-25 2017-12-12 A10 Networks, Inc. Graceful scaling in software driven networks
US9900252B2 (en) 2013-03-08 2018-02-20 A10 Networks, Inc. Application delivery controller and global server load balancer
US9906422B2 (en) 2014-05-16 2018-02-27 A10 Networks, Inc. Distributed system to determine a server's health
US9942162B2 (en) 2014-03-31 2018-04-10 A10 Networks, Inc. Active application response delay time
US9942152B2 (en) 2014-03-25 2018-04-10 A10 Networks, Inc. Forwarding data packets using a service-based forwarding policy
US9960967B2 (en) 2009-10-21 2018-05-01 A10 Networks, Inc. Determining an application delivery server based on geo-location information
US9979801B2 (en) 2011-12-23 2018-05-22 A10 Networks, Inc. Methods to manage services over a service gateway
US9986061B2 (en) 2014-06-03 2018-05-29 A10 Networks, Inc. Programming a data network device using user defined scripts
WO2018097058A1 (fr) * 2016-11-22 2018-05-31 日本電気株式会社 Nœud d'analyse, procédé de gestion de ressources, et support d'enregistrement de programme
US9992229B2 (en) 2014-06-03 2018-06-05 A10 Networks, Inc. Programming a data network device using user defined scripts with licenses
US9992107B2 (en) 2013-03-15 2018-06-05 A10 Networks, Inc. Processing data packets using a policy based network path
US10002141B2 (en) 2012-09-25 2018-06-19 A10 Networks, Inc. Distributed database in software driven networks
US10021174B2 (en) 2012-09-25 2018-07-10 A10 Networks, Inc. Distributing service sessions
US10027761B2 (en) 2013-05-03 2018-07-17 A10 Networks, Inc. Facilitating a secure 3 party network session by a network device
US10038693B2 (en) 2013-05-03 2018-07-31 A10 Networks, Inc. Facilitating secure network traffic by an application delivery controller
US10044582B2 (en) 2012-01-28 2018-08-07 A10 Networks, Inc. Generating secure name records
US10129122B2 (en) 2014-06-03 2018-11-13 A10 Networks, Inc. User defined objects for network devices
US10230770B2 (en) 2013-12-02 2019-03-12 A10 Networks, Inc. Network proxy layer for policy-based application proxies
USRE47296E1 (en) 2006-02-21 2019-03-12 A10 Networks, Inc. System and method for an adaptive TCP SYN cookie with time validation
US10243791B2 (en) 2015-08-13 2019-03-26 A10 Networks, Inc. Automated adjustment of subscriber policies
US10581976B2 (en) 2015-08-12 2020-03-03 A10 Networks, Inc. Transmission control of protocol state exchange for dynamic stateful service insertion

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09212468A (ja) * 1996-02-02 1997-08-15 Fujitsu Ltd 複合計算機システム
JP2000298637A (ja) * 1999-04-15 2000-10-24 Nec Software Kyushu Ltd 負荷分散システム、負荷分散方法、および記録媒体
EP1063831A2 (fr) * 1999-06-24 2000-12-27 Canon Kabushiki Kaisha Serveur d'état de réseau, système de distribution d'information, méthode de contrôle, et support d'enregistrement pour enregister un programme de contrôle
JP2002163241A (ja) * 2000-11-29 2002-06-07 Ntt Data Corp クライアントサーバシステム
JP2002259354A (ja) * 2001-03-01 2002-09-13 Hitachi Ltd ネットワークシステム及び負荷分散方法

Cited By (63)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2006043322A1 (ja) * 2004-10-20 2008-05-22 富士通株式会社 サーバ管理プログラム、サーバ管理方法、およびサーバ管理装置
JP4558740B2 (ja) * 2004-10-20 2010-10-06 富士通株式会社 アプリケーション管理プログラム、アプリケーション管理方法、およびアプリケーション管理装置
JPWO2006043321A1 (ja) * 2004-10-20 2008-05-22 富士通株式会社 アプリケーション管理プログラム、アプリケーション管理方法、およびアプリケーション管理装置
JP2006259793A (ja) * 2005-03-15 2006-09-28 Hitachi Ltd 共用リソース管理方法およびその実施情報処理システム
US8769545B2 (en) 2005-10-20 2014-07-01 Hitachi, Ltd. Server pool management method
JP4650203B2 (ja) * 2005-10-20 2011-03-16 株式会社日立製作所 情報システム及び管理計算機
JP2007114983A (ja) * 2005-10-20 2007-05-10 Hitachi Ltd サーバプール管理方法
US7693995B2 (en) 2005-11-09 2010-04-06 Hitachi, Ltd. Arbitration apparatus for allocating computer resource and arbitration method therefor
USRE47296E1 (en) 2006-02-21 2019-03-12 A10 Networks, Inc. System and method for an adaptive TCP SYN cookie with time validation
JP2007264794A (ja) * 2006-03-27 2007-10-11 Fujitsu Ltd 並列分散処理プログラム及び並列分散処理システム
WO2008007435A1 (fr) * 2006-07-13 2008-01-17 Fujitsu Limited Programme de gestion de ressources, procédé de gestion de ressources et dispositif de gestion de ressources
US9270705B1 (en) 2006-10-17 2016-02-23 A10 Networks, Inc. Applying security policy to an application session
US9497201B2 (en) 2006-10-17 2016-11-15 A10 Networks, Inc. Applying security policy to an application session
US9253152B1 (en) 2006-10-17 2016-02-02 A10 Networks, Inc. Applying a packet routing policy to an application session
JP2010272090A (ja) * 2009-05-25 2010-12-02 Hitachi Ltd 処理依頼先管理装置、処理依頼先管理プログラムおよび処理依頼先管理方法
US10735267B2 (en) 2009-10-21 2020-08-04 A10 Networks, Inc. Determining an application delivery server based on geo-location information
US9960967B2 (en) 2009-10-21 2018-05-01 A10 Networks, Inc. Determining an application delivery server based on geo-location information
JP2011138202A (ja) * 2009-12-25 2011-07-14 Fujitsu Ltd サーバ装置、サーバ負荷分散装置、サーバ負荷分散方法、及びプログラム
US10447775B2 (en) 2010-09-30 2019-10-15 A10 Networks, Inc. System and method to balance servers based on server load status
US9961135B2 (en) 2010-09-30 2018-05-01 A10 Networks, Inc. System and method to balance servers based on server load status
JP2014502382A (ja) * 2010-09-30 2014-01-30 エイ10 ネットワークス インコーポレイテッド サーバ負荷状態に基づきサーバをバランスさせるシステムと方法
US10178165B2 (en) 2010-12-02 2019-01-08 A10 Networks, Inc. Distributing application traffic to servers based on dynamic service response time
US9609052B2 (en) 2010-12-02 2017-03-28 A10 Networks, Inc. Distributing application traffic to servers based on dynamic service response time
US9961136B2 (en) 2010-12-02 2018-05-01 A10 Networks, Inc. Distributing application traffic to servers based on dynamic service response time
US9270774B2 (en) 2011-10-24 2016-02-23 A10 Networks, Inc. Combining stateless and stateful server load balancing
US10484465B2 (en) 2011-10-24 2019-11-19 A10 Networks, Inc. Combining stateless and stateful server load balancing
US9906591B2 (en) 2011-10-24 2018-02-27 A10 Networks, Inc. Combining stateless and stateful server load balancing
US9386088B2 (en) 2011-11-29 2016-07-05 A10 Networks, Inc. Accelerating service processing using fast path TCP
US9979801B2 (en) 2011-12-23 2018-05-22 A10 Networks, Inc. Methods to manage services over a service gateway
US10044582B2 (en) 2012-01-28 2018-08-07 A10 Networks, Inc. Generating secure name records
US9602442B2 (en) 2012-07-05 2017-03-21 A10 Networks, Inc. Allocating buffer for TCP proxy session based on dynamic network conditions
US9705800B2 (en) 2012-09-25 2017-07-11 A10 Networks, Inc. Load distribution in data networks
US10021174B2 (en) 2012-09-25 2018-07-10 A10 Networks, Inc. Distributing service sessions
US10862955B2 (en) 2012-09-25 2020-12-08 A10 Networks, Inc. Distributing service sessions
US10002141B2 (en) 2012-09-25 2018-06-19 A10 Networks, Inc. Distributed database in software driven networks
US10516577B2 (en) 2012-09-25 2019-12-24 A10 Networks, Inc. Graceful scaling in software driven networks
US10491523B2 (en) 2012-09-25 2019-11-26 A10 Networks, Inc. Load distribution in data networks
US9843484B2 (en) 2012-09-25 2017-12-12 A10 Networks, Inc. Graceful scaling in software driven networks
US9544364B2 (en) 2012-12-06 2017-01-10 A10 Networks, Inc. Forwarding policies on a virtual service network
US9338225B2 (en) 2012-12-06 2016-05-10 A10 Networks, Inc. Forwarding policies on a virtual service network
US9531846B2 (en) 2013-01-23 2016-12-27 A10 Networks, Inc. Reducing buffer usage for TCP proxy session based on delayed acknowledgement
US9900252B2 (en) 2013-03-08 2018-02-20 A10 Networks, Inc. Application delivery controller and global server load balancer
US11005762B2 (en) 2013-03-08 2021-05-11 A10 Networks, Inc. Application delivery controller and global server load balancer
US9992107B2 (en) 2013-03-15 2018-06-05 A10 Networks, Inc. Processing data packets using a policy based network path
US10659354B2 (en) 2013-03-15 2020-05-19 A10 Networks, Inc. Processing data packets using a policy based network path
US10027761B2 (en) 2013-05-03 2018-07-17 A10 Networks, Inc. Facilitating a secure 3 party network session by a network device
US10038693B2 (en) 2013-05-03 2018-07-31 A10 Networks, Inc. Facilitating secure network traffic by an application delivery controller
US10305904B2 (en) 2013-05-03 2019-05-28 A10 Networks, Inc. Facilitating secure network traffic by an application delivery controller
US10230770B2 (en) 2013-12-02 2019-03-12 A10 Networks, Inc. Network proxy layer for policy-based application proxies
US9942152B2 (en) 2014-03-25 2018-04-10 A10 Networks, Inc. Forwarding data packets using a service-based forwarding policy
US9942162B2 (en) 2014-03-31 2018-04-10 A10 Networks, Inc. Active application response delay time
US10257101B2 (en) 2014-03-31 2019-04-09 A10 Networks, Inc. Active application response delay time
US9906422B2 (en) 2014-05-16 2018-02-27 A10 Networks, Inc. Distributed system to determine a server's health
US10686683B2 (en) 2014-05-16 2020-06-16 A10 Networks, Inc. Distributed system to determine a server's health
US9992229B2 (en) 2014-06-03 2018-06-05 A10 Networks, Inc. Programming a data network device using user defined scripts with licenses
US9986061B2 (en) 2014-06-03 2018-05-29 A10 Networks, Inc. Programming a data network device using user defined scripts
US10129122B2 (en) 2014-06-03 2018-11-13 A10 Networks, Inc. User defined objects for network devices
US10749904B2 (en) 2014-06-03 2020-08-18 A10 Networks, Inc. Programming a data network device using user defined scripts with licenses
US10880400B2 (en) 2014-06-03 2020-12-29 A10 Networks, Inc. Programming a data network device using user defined scripts
JP2016045505A (ja) * 2014-08-19 2016-04-04 日本電信電話株式会社 サービス提供システム、及びサービス提供方法
US10581976B2 (en) 2015-08-12 2020-03-03 A10 Networks, Inc. Transmission control of protocol state exchange for dynamic stateful service insertion
US10243791B2 (en) 2015-08-13 2019-03-26 A10 Networks, Inc. Automated adjustment of subscriber policies
WO2018097058A1 (fr) * 2016-11-22 2018-05-31 日本電気株式会社 Nœud d'analyse, procédé de gestion de ressources, et support d'enregistrement de programme

Also Published As

Publication number Publication date
JPWO2004084085A1 (ja) 2006-06-22

Similar Documents

Publication Publication Date Title
WO2004084085A1 (fr) Systeme de distribution de charge par cooperation intersite
CN111818159B (zh) 数据处理节点的管理方法、装置、设备及存储介质
CN104769919B (zh) 对复制型数据库的访问进行负载平衡
JP5039951B2 (ja) ストレージ・デバイス・ポートの選択の最適化
JP4827097B2 (ja) グリッド・システム資源をオンデマンドで制御する装置、システム及び方法
US5341477A (en) Broker for computer network server selection
CN110166524B (zh) 数据中心的切换方法、装置、设备及存储介质
US20110271275A1 (en) Software distribution management method of computer system and computer system for software distribution management
US20200050479A1 (en) Blockchain network and task scheduling method therefor
CN112583861A (zh) 服务部署方法、资源配置方法、系统、装置及服务器
JPH03116262A (ja) コンピュータネットワークにおけるサーバを選択するための方法及び装置
US10541901B2 (en) Methods, systems and computer readable media for optimizing placement of virtual network visibility components
CN110365748A (zh) 业务数据的处理方法和装置、存储介质及电子装置
TW201237655A (en) Information processing system, information processing apparatus, load balancing method, database deployment planning method, and program for realizing connection distribution for load balancing in distributed database
CN111240838B (zh) 一种压力测试方法和装置
JP2012099062A (ja) サービス連携システムおよび情報処理システム
CN110225137B (zh) 业务请求处理方法、系统、服务器及存储介质
JP2007164264A (ja) 負荷分散プログラム、負荷分散装置、サービスシステム
KR20200080458A (ko) 클라우드 멀티-클러스터 장치
US20050144280A1 (en) Load distribution system by inter-site cooperation
JP5661328B2 (ja) 効率的でコスト効率の良い分散呼受付制御
CN115373843A (zh) 一种动态预判最优路径设备的方法、装置、及介质
EP2625610B1 (fr) Allocation d'applications dans des centres de données
CN113268329A (zh) 一种请求调度方法、装置及存储介质
CN112737806B (zh) 网络流量的迁移方法及装置

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP US

WWE Wipo information: entry into national phase

Ref document number: 2004569568

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 11050058

Country of ref document: US