WO2011140951A1 - Method, device, and system for load balancing - Google Patents



Publication number
WO2011140951A1
Authority: WIPO (PCT)
Prior art keywords: load, load balancer, client, node, balancer node
Application number: PCT/CN2011/073690
Other languages: English (en), French (fr)
Inventor: 段海峰
Original Assignee: 华为技术有限公司
Application filed by 华为技术有限公司
Publication of WO2011140951A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/12: Avoiding congestion; Recovering from congestion
    • H04L 47/125: Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering

Description

  • The present invention relates to the field of computer technologies, and in particular to a method, device, and system for load balancing.

Background
  • A server cluster system pools multiple servers to serve clients.
  • The clients served by the server cluster system include network routing devices such as a Network Access Server (hereinafter NAS), a Packet Data Serving Node (hereinafter PDSN), and a Gateway GPRS Support Node (hereinafter GGSN).
  • The servers in the cluster share the same virtual IP address, so from the point of view of an external client the cluster appears as a single server.
  • A load balancer (LoadBalance, hereinafter LB) is required to balance load across the server cluster.
  • In the prior art, a server cluster system consists of a load balancer node and multiple server nodes, and the load balancer node performs load balancing according to the load status of each server node. After a client's service flow reaches the load balancer node, the node determines which server node has the least load and forwards the flow to it for processing, thereby achieving load balancing.
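The prior-art dispatch step, picking the least-loaded server and forwarding the service flow to it, can be sketched as follows. This is a minimal illustration, not the patent's implementation; the server names and load counts are hypothetical:

```python
# Minimal sketch of the prior-art load balancer node: track the number of
# active flows per server and forward each new service flow to the server
# currently handling the fewest flows. Names and counts are illustrative.

def pick_least_loaded(server_loads):
    """Return the server currently handling the fewest flows."""
    return min(server_loads, key=server_loads.get)

def forward_flow(server_loads, flow_id):
    """Assign a flow to the least-loaded server and update its load count."""
    server = pick_least_loaded(server_loads)
    server_loads[server] += 1
    return server

loads = {"server-1": 3, "server-2": 1, "server-3": 2}
assert forward_flow(loads, "flow-a") == "server-2"  # server-2 had the least load
assert loads["server-2"] == 2                       # its load count was updated
```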
  • When traffic exceeds the maximum processing capacity of a load balancer node, a new load balancer node usually has to be added to share the balancing work.
  • When a load balancer node is added in the prior art, the new node exposes its own virtual IP address for balancing service flows. Some service flows of the original load balancer node are re-routed to the new node's virtual IP address, so that the nodes share the work of balancing the service flows.
  • However, each load balancer node works independently of the others. When a load balancer node fails and needs to be replaced, the services being processed on that node must be interrupted.
  • Moreover, because the nodes work independently, multiple load balancer nodes cannot be managed collectively, so the load on individual nodes may become unbalanced, degrading the load balancing performance of all the load balancer nodes as a whole.
  • To address this, the present invention provides a load balancing method, apparatus, and system that improve the overall balancing performance when multiple load balancer nodes are used to balance service requests.
  • Embodiments of the present invention provide a method for load balancing, including: obtaining an Address Resolution Protocol (ARP) request sent by a client; selecting, in a load balancer cluster, a load balancer node that meets a preset load condition and obtaining its Media Access Control (MAC) address; and sending an ARP response to the client, where the ARP response includes the MAC address of the load balancer node that meets the preset load condition, so that after receiving the ARP response the client sends its service request to that load balancer node for balancing.
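The ARP response described above can be sketched as a raw Ethernet/ARP frame in which the sender hardware address is the MAC of the chosen load balancer node rather than the responder's own. This is an illustrative sketch of the wire format only (RFC 826 layout); all addresses are hypothetical:

```python
# Sketch of the ARP reply the manager returns: an ARP "is-at" answer for the
# cluster's virtual IP that carries the MAC address of the chosen load
# balancer node. Addresses below are illustrative.
import struct
import socket

def build_arp_reply(vip, lb_mac, client_ip, client_mac):
    """Build an Ethernet frame containing an ARP reply (opcode 2)."""
    sha = bytes.fromhex(lb_mac.replace(":", ""))      # sender MAC = chosen LB node
    tha = bytes.fromhex(client_mac.replace(":", ""))  # target MAC = requesting client
    arp = struct.pack("!HHBBH6s4s6s4s",
                      1, 0x0800, 6, 4, 2,             # Ethernet/IPv4, ARP reply
                      sha, socket.inet_aton(vip),
                      tha, socket.inet_aton(client_ip))
    eth = tha + sha + struct.pack("!H", 0x0806)       # Ethernet header, EtherType ARP
    return eth + arp

frame = build_arp_reply("10.0.0.100", "02:00:00:00:00:01",
                        "10.0.0.50", "02:00:00:00:00:99")
assert len(frame) == 14 + 28  # Ethernet header plus fixed-size ARP payload
```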
  • An embodiment of the invention further provides a load balancing manager, including:
  • a first obtaining module, configured to obtain the Address Resolution Protocol (ARP) request sent by the client;
  • a second obtaining module, configured to select a load balancer node in the load balancer cluster that meets a preset load condition, and to obtain the Media Access Control (MAC) address of that node;
  • a sending module, configured to send an ARP response to the client, where the ARP response includes the MAC address of the load balancer node that meets the preset load condition, so that after receiving the ARP response the client sends its service request to that node for balancing.
  • An embodiment of the present invention further provides a load balancing system, including the foregoing load balancing manager and a load balancer cluster, where the cluster includes at least two load balancer nodes;
  • each load balancer node is configured to balance a service request after receiving it from the client.
  • By obtaining the ARP request sent by the client and selecting, according to that request, a load balancer node that meets the preset load condition to balance the client's service, the invention enables effective management of the load balancer cluster and improves the overall balancing performance of the cluster.

BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1a is a flowchart of a load balancing method according to a first embodiment of the present invention;
  • FIG. 1b is a schematic diagram of an application scenario of the load balancing method according to the first embodiment of the present invention;
  • FIG. 2 is a flowchart of a load balancing method according to a second embodiment of the present invention;
  • FIG. 3 is a flowchart of a load balancing method according to a third embodiment of the present invention;
  • FIG. 4 is a schematic structural diagram of an LBM according to a fourth embodiment of the present invention;
  • FIG. 5 is a schematic structural diagram of a load balancing system according to a fifth embodiment of the present invention.

DETAILED DESCRIPTION
  • FIG. 1a is a flowchart of the load balancing method according to the first embodiment of the present invention.
  • The execution entity of the load balancing method in this embodiment may be a load balancing manager (LoadBalance Manager, hereinafter LBM). When the client makes a service request to the server cluster, the LBM obtains the Address Resolution Protocol (ARP) request sent by the client and, by answering the ARP request on the cluster's behalf, returns to the client the Media Access Control (hereinafter MAC) address of a load balancer node that meets a preset load condition.
  • The load balancer node that meets the preset load condition is a lightly loaded one: it may be the least-loaded node in the load balancer cluster, or a node whose ratio of current load to rated load is below a preset threshold. The threshold can be set according to the specific application environment, for example 70%.
  • The method includes the following steps. Step 11: Acquire the ARP request sent by the client.
  • When the client makes a service request to the server cluster, it first sends an ARP request to find a load balancer node that will balance its service request. That load balancer node selects a server for the client and returns the server's MAC address; the client then sends its service request according to the MAC address, and the server processes the request.
  • The load balancers form a load balancer cluster and share one virtual IP address.
  • All load balancer nodes send their own MAC addresses to the LBM for registration. Besides the MAC address, each node may also send its own device ID (identification), so that the registration information is sufficient for the LBM to manage the nodes.
  • The LBM stores the MAC address and device ID of every load balancer node for state detection and management in subsequent processes.
  • The virtual IP address of the load balancer cluster is configured on a port of the LBM, so that ARP requests sent by clients are routed to the LBM first.
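The registration step described above can be sketched as follows. The class name, device IDs, and MAC addresses are hypothetical; this only illustrates the LBM keeping a registry of node identities for later management:

```python
# Sketch of load balancer node registration at the LBM: each node reports
# its device ID and MAC address, and the LBM stores them for later state
# detection and management. IDs and MACs are illustrative.

class LBMRegistry:
    def __init__(self):
        self.nodes = {}  # device_id -> MAC address

    def register(self, device_id, mac):
        """Record a load balancer node joining the cluster."""
        self.nodes[device_id] = mac

    def macs(self):
        """MAC addresses of all registered load balancer nodes."""
        return list(self.nodes.values())

registry = LBMRegistry()
registry.register("lb-1", "02:00:00:00:00:01")
registry.register("lb-2", "02:00:00:00:00:02")
assert registry.macs() == ["02:00:00:00:00:01", "02:00:00:00:00:02"]
```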
  • When the client requests service from the server cluster, it first sends an ARP request for the virtual IP address of the load balancer cluster.
  • Because that virtual IP address is pre-configured on the LBM, the ARP request is forwarded to the LBM after passing through the switch.
  • The LBM only handles clients' ARP requests; it does not process the service requests themselves, which are balanced by the load balancer nodes the LBM selects. The LBM device can therefore handle a large number of client ARP requests.
  • Step 12: Select a load balancer node in the load balancer cluster that meets the preset load condition, and obtain that node's MAC address.
  • To do this, the LBM first obtains the load status of each load balancer node in the cluster and then selects a node that meets the preset load condition.
  • The LBM can obtain the load status in several ways. For example, each load balancer node can be configured to send load status information to the LBM periodically; this information can include the number of services the node is currently processing, from which the LBM determines the node's load. Alternatively, the nodes can be configured to send periodic heartbeat detection information to the LBM with the load status information carried inside it.
  • The heartbeat detection information notifies the LBM of each node's health and load level, for example whether the node is working normally and whether it is overloaded.
  • When a node sends the heartbeat detection information with the load status information included, it does not need to send the load status information to the LBM separately.
  • When selecting the load balancer node that meets the preset load condition, the LBM can either select the node with the smallest load in the cluster, or select a node whose ratio of current load to rated load is below a preset threshold.
  • The load of a node can be measured, for example, by the number of service requests it is currently processing.
  • After receiving the ARP request sent by the client, the LBM selects, according to the load status of each load balancer node, a node that meets the preset load condition, and obtains that node's MAC address.
  • If the LBM selects by the ratio of current load to rated load and several nodes fall below the preset threshold, one of them can be chosen at random.
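The two selection policies just described can be sketched as follows. The node data and the 70% threshold are illustrative; the random tie-break among below-threshold nodes follows the paragraph above:

```python
# Sketch of the LBM's two selection policies: pick the node with the
# smallest load, or pick at random among nodes whose current/rated load
# ratio is below a preset threshold. Node data is illustrative.
import random

def select_least_loaded(nodes):
    """nodes: {mac: (current_load, rated_load)}; return the least-loaded MAC."""
    return min(nodes, key=lambda mac: nodes[mac][0])

def select_below_threshold(nodes, threshold=0.7):
    """Pick randomly among nodes whose load ratio is below the threshold."""
    candidates = [mac for mac, (cur, rated) in nodes.items()
                  if cur / rated < threshold]
    return random.choice(candidates) if candidates else None

nodes = {"lb-1": (90, 100), "lb-2": (30, 100), "lb-3": (60, 100)}
assert select_least_loaded(nodes) == "lb-2"
assert select_below_threshold(nodes) in ("lb-2", "lb-3")  # lb-1 is above 70%
```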
  • Step 13: Send an ARP response to the client, where the response carries the MAC address of the load balancer node that meets the preset load condition. After receiving the ARP response, the client sends its service request, according to that MAC address, to the selected load balancer node for balancing.
  • The load balancer node then forwards the service request, according to the load status of the server cluster, to a lightly loaded server, and that server processes the client's request, thereby achieving load balancing across the server cluster.
  • A new load balancer node can also be added to the cluster to increase its balancing capacity. The added node only needs to register with the LBM, providing its MAC address and device ID. No downtime configuration of the whole load balancer cluster is required, and no additional routing configuration is needed for the added node. This avoids frequent service interruptions on the load balancer devices and, in principle, allows unrestricted expansion of the load balancer cluster.
  • In the prior art, by contrast, each added load balancer node required re-routing part of the clients' service flows to the new node and modifying the existing network configuration on the new node and its neighboring network devices, which increased the maintenance burden of the cluster. The solution of this embodiment therefore manages and maintains the load balancer cluster more simply and effectively than the prior art, and reduces the maintenance burden during expansion.
  • Because the LBM is the core device of the entire system, its security and availability must be ensured.
  • The LBM can adopt a hot-standby (Hot Redundant) mechanism: a master LBM and a slave LBM are deployed at the same time, and dual-system hot backup ensures that the LBM works safely and reliably.
  • The dual-system hot backup mechanism itself is similar to the prior art and is not described again.
  • In this embodiment, the ARP request sent by the client is obtained, and a load balancer node that meets the preset load condition is selected according to that request to balance the client's service, thereby enabling effective management of the load balancer cluster and improving the overall balancing performance of the cluster.
  • Further, the LBM may detect and maintain the load balancer cluster, for example by monitoring the state of each load balancer node; when it detects that a node has failed or is overloaded, it transfers the balancing services on that node to another load balancer node.
  • FIG. 2 is a flowchart of the load balancing method according to the second embodiment.
  • The execution entity of the method in this embodiment may again be the LBM; the embodiment mainly addresses transferring the service requests on a faulty load balancer node to a normally working node for balancing.
  • After receiving a client's ARP request and selecting a load balancer node to balance that client's service requests, the LBM also records the correspondence between the client's MAC address and the selected node, so that when a node fails it can obtain the MAC addresses of the clients using that node and notify those clients to re-issue their service requests.
  • Step 21: When it is determined that an unavailable load balancer node exists in the load balancer cluster, obtain the MAC addresses of the clients that use that node.
  • The load balancer nodes in the cluster can be configured to send heartbeat detection information to the LBM periodically; the heartbeat notifies the LBM of each node's health and load level, such as whether the node can work normally and whether it is overloaded.
  • If the LBM does not receive a node's heartbeat detection information within a preset detection time, it considers the node to have failed, determines it to be an unavailable node, and obtains the list of MAC addresses of the clients that use it, so that their service requests can be transferred to other working load balancer nodes for balancing.
  • The node's load status information may also be carried in the heartbeat detection information, so that each node does not need to send load status information to the LBM separately, simplifying the system flow.
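The heartbeat-timeout check described above can be sketched as follows. The node names, timestamps, and the 5-second detection time are illustrative only:

```python
# Sketch of heartbeat-based failure detection at the LBM: record the time
# of each node's last heartbeat and mark as unavailable any node whose
# heartbeat has not arrived within the preset detection time.

def find_unavailable(last_heartbeat, now, detection_time):
    """Return the nodes whose last heartbeat is older than detection_time."""
    return [node for node, t in last_heartbeat.items()
            if now - t > detection_time]

heartbeats = {"lb-1": 100.0, "lb-2": 97.0, "lb-3": 91.0}
# With a 5-second detection window at t=100, only lb-3 has timed out.
assert find_unavailable(heartbeats, now=100.0, detection_time=5.0) == ["lb-3"]
```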
  • Step 22: According to the obtained client MAC addresses, transfer the clients' service requests to a normally working load balancer node for balancing.
  • The method for selecting the load balancer node that meets the preset load condition is the same as in the first embodiment and is not repeated here.
  • The selected node acts as a substitute for the unavailable node, providing balancing of the service requests of the clients that corresponded to the unavailable node.
  • Concretely, the LBM obtains the MAC addresses of the clients that used the unavailable node and sends each of them an update notification containing the MAC address of the load balancer node that meets the preset load condition; after receiving the update notification, the client sends its service requests, according to that MAC address, to the new node for balancing.
  • The LBM should also record the unavailable node and no longer direct clients to it when handling subsequent requests.
  • Thus, when a load balancer node in the cluster fails, the LBM can redirect the service requests that were balanced on the unavailable node to a standby load balancer node, so that client service requests are not stopped because a node became unavailable and the load balancer cluster gains better disaster recovery capability.
  • By determining the state of each load balancer node from the heartbeat detection information, the LBM avoids sending service requests to a failed node and can transfer that node's traffic, improving the disaster recovery performance of the load balancer cluster, the quality of its load balancing, and the overall load balancing performance of the cluster system.
  • FIG. 3 is a flowchart of the load balancing method according to the third embodiment.
  • The execution entity of the method in this embodiment may again be the LBM; the embodiment mainly addresses how to rebalance when a load balancer node is overloaded.
  • After receiving a client's ARP request and selecting a load balancer node to balance that client's service requests, the LBM also records the correspondence between the client's MAC address and the node, so that when a node is overloaded it can obtain the MAC addresses of the clients using that node and transfer some clients' service requests to other load balancer nodes, eliminating the overload on the node.
  • Step 31: When it is determined that an overloaded load balancer node exists in the load balancer cluster, obtain the MAC addresses of the excess clients; the excess clients are those clients of the overloaded node whose service requests exceed the node's capacity.
  • The load balancer nodes can be configured to send periodic heartbeat detection information to the LBM notifying it of their health status and load level, such as whether a node can work normally and whether it is overloaded. If a node's heartbeat indicates poor health or a load that is too high, for example above a preset threshold, the LBM considers the node overloaded. The LBM then obtains the MAC addresses of the clients using the overloaded node and selects the excess clients among them.
  • For example, if the upper limit of a load balancer node is 100 client service requests but it is carrying 120, it is considered overloaded and may not work properly.
  • The LBM then obtains the MAC addresses of 20 of the clients using that node, so that their service requests can be transferred to a normally working load balancer node for balancing.
  • Step 32: According to the MAC addresses of the excess clients, transfer their service requests to a normally working load balancer for balancing.
  • The excess clients' service requests can be transferred directly to a load balancer node that meets the preset load condition, as follows: determine the node in the cluster that meets the preset load condition (the method is the same as in the first embodiment and is not repeated here); once selected, that node takes over part of the client service requests and relieves the overloaded load balancer.
  • The LBM may select the service requests of the part of the clients that push the node beyond its load.
  • For example, load balancer 1 can carry at most 100 client service requests; when it carries 120, the service requests of the 20 excess clients are transferred to load balancer 2, which meets the preset load condition, eliminating the overload on load balancer node 1.
  • When transferring, the target load balancer 2 must not itself become overloaded. If the transferred service requests exceed what load balancer 2 can still carry, only part of the traffic is moved to it. For example, if load balancer 2 can carry 100 client service requests and currently carries 90, but 20 requests need to be transferred, only 10 can go to load balancer 2; the LBM then finds another node that meets the preset load condition, for example load balancer 3, and transfers part of the remaining traffic to it. Repeating this method until all the excess traffic on load balancer 1 has been transferred achieves load balancing across the entire load balancer cluster.
  • Thus, when a load balancer node in the cluster is overloaded, the LBM can move the service requests balanced on it onto other load balancer nodes, so that the cluster provides better load balancing.
  • By determining the state of each load balancer node from the heartbeat detection information, the traffic of an overloaded node can be transferred to other nodes, so that the load balancer cluster provides better load balancing and the load balancing performance of the cluster system is improved.
  • FIG. 4 shows the LBM of the fourth embodiment, which may include a first obtaining module 41, a second obtaining module 42, and a sending module 43.
  • The first obtaining module 41 is configured to obtain the Address Resolution Protocol (ARP) request sent by the client. The second obtaining module 42 is configured to select a load balancer node in the load balancer cluster that meets a preset load condition and to obtain the Media Access Control (MAC) address of that node.
  • The sending module 43 is configured to send an ARP response to the client, where the ARP response includes the MAC address of the load balancer node that meets the preset load condition, so that after receiving the response the client sends its service request to that node for balancing.
  • When selecting the load balancer node that meets the preset load condition, the second obtaining module 42 may select the node with the smallest load in the cluster, or select a node whose ratio of current load to rated load is below a preset threshold.
  • The load balancers form a load balancer cluster sharing one virtual IP address, and that virtual IP address is configured on a port of the LBM.
  • All load balancer nodes send their device IDs and MAC addresses to the LBM for registration, and the LBM stores them for subsequent state detection and management of the nodes.
  • When the client requests a service from the server cluster, it first sends an ARP request for the virtual IP address of the load balancer cluster; the ARP request is forwarded to the LBM after passing through the switch.
  • The LBM selects a load balancer node that satisfies the preset load condition to balance the service request, and that node selects a server to serve the client.
  • Further, the LBM may detect and maintain the load balancer cluster, for example by monitoring the state of each node; when it detects a faulty or overloaded node, it transfers the balancing services on that node to another load balancer node.
  • To this end, the LBM also includes:
  • a first determining module 44, configured to determine whether an unavailable load balancer node exists in the load balancer cluster — specifically, when no heartbeat detection information is received from a node within a preset detection time, the module determines that the node is unavailable;
  • a third obtaining module 45, configured to obtain, when the first determining module finds an unavailable node, the MAC addresses of the clients that use that node;
  • a first equalization module 46, configured to transfer those clients' service requests, according to the obtained MAC addresses, to a normally working load balancing node for balancing.
  • The first equalization module 46 includes:
  • a determining unit 46a, configured to determine the load balancer node in the cluster that meets the preset load condition;
  • an updating unit 46b, configured to send, according to the obtained client MAC addresses, an update notification to each client containing the MAC address of the node that meets the preset load condition, so that after receiving the notification the client sends its service requests to that node for balancing.
  • When determining a lightly loaded node, the determining unit 46a first obtains the load status of each node in the cluster and then computes, from those load states, which node meets the preset load condition.
  • The LBM further includes:
  • a second determining module 47, configured to determine whether an overloaded load balancer node exists in the load balancer cluster;
  • a fourth obtaining module 48, configured to obtain, when the second determining module finds an overloaded node, the MAC addresses of the excess clients — the clients of the overloaded node whose service requests exceed its capacity;
  • a second equalization module 49, configured to transfer the excess clients' service requests, according to their MAC addresses, to a normally working load balancer for balancing.
  • When the excess clients' service requests are transferred to a normally working load balancer, they can be sent directly to a load balancer node that meets the preset load condition. The second equalization module 49 includes:
  • a determining unit 49a, configured to determine the load balancer node in the cluster that meets the preset load condition;
  • an updating unit 49b, configured to send, according to the excess clients' MAC addresses, an update notification to each such client containing the MAC address of the node that meets the preset load condition, so that after receiving the notification the client sends its service requests, according to that MAC address, to that node for balancing.
  • Like unit 46a, the determining unit 49a first obtains the load status of each node in the cluster and then computes, from those load states, which node meets the preset load condition.
  • by obtaining the ARP request sent by the client and, according to that request, selecting for the client a load balancer node that meets the preset load condition to perform service balancing, the load balancer cluster is managed effectively and the overall balancing effect of the cluster is improved.
  • when unavailable and overloaded load balancer nodes are found, their traffic can be transferred to other load balancer nodes, improving the disaster recovery performance and load balancing performance of the load balancer cluster.
  • FIG. 5 is a schematic structural diagram of a load balancing system according to a fifth embodiment of the present invention.
  • the system includes an LBM 51 and a load balancer cluster 52.
  • the load balancer cluster 52 includes at least two load balancer nodes 52a.
  • the structure and functions of the LBM are the same as in the fourth embodiment corresponding to FIG. 4; for its functions and the interaction mechanisms and effects among its modules, reference may be made to the embodiments corresponding to FIG. 1a to FIG. 3, which are not repeated here.
  • the load balancer node 52a is configured to perform equalization processing on the service request after receiving the service request sent by the client.
  • the load balancer node 52a is further configured to send load status information to the LBM;
  • the LBM 51 is further configured to determine a load status of the load balancer node 52a based on load status information of the load balancer node 52a.
  • the load balancer nodes 52a in the load balancer cluster 52 may be preset to periodically send heartbeat detection information to the LBM 51; the heartbeat detection information notifies the LBM 51 of the health status and load level of the load balancer node 52a, so that the LBM 51 can determine whether the node is available. When the LBM 51 receives no heartbeat detection information from a node for a certain period of time, it considers that the node has failed and judges it to be an unavailable node.
  • the load balancer node 52a is further configured to send heartbeat detection information to the LBM 51 within a preset time interval;
  • the LBM 51 is further configured to: when the heartbeat detection information of the load balancer node in the load balancer cluster is not received within a preset detection time, determine that the load balancer node is an unavailable load balancer node.
  • by obtaining the ARP request sent by the client and, according to that request, selecting for the client a load balancer node that meets the preset load condition to perform service balancing, the load balancer cluster is managed effectively and the overall balancing effect of the cluster is improved.
  • when unavailable and overloaded load balancer nodes are found, their traffic can be transferred to other load balancer nodes, improving the disaster recovery performance and load balancing performance of the load balancer cluster.
  • the modules in the apparatuses of the embodiments may be distributed in the apparatuses as described in the embodiments, or may be changed accordingly and located in one or more apparatuses different from those of the embodiments.
  • the modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Description

Method, Device, and System for Load Balancing

This application claims priority to Chinese Patent Application No. 201010268922.2, filed on August 25, 2010 and entitled "Method, device, and system for load balancing", which is incorporated herein by reference in its entirety.

Technical Field

The present invention relates to the field of computer technologies, and in particular, to a method, device, and system for load balancing.

Background

A server cluster system aggregates multiple servers to provide services to clients. The clients served by a server cluster system include network routing devices such as Network Access Server (NAS) devices, Packet Data Serving Nodes (PDSN), and Gateway GPRS Support Nodes (GGSN).

The servers in the cluster share one virtual IP address externally, so that to an outside client the cluster appears as a single server providing service. To prevent the servers from carrying unbalanced loads, a load balancer (LoadBalance, LB) is used to balance the load across the server cluster.

A prior-art server cluster system consists of a load balancer node plus several server nodes; the load balancer node performs load balancing according to the load status of each server node. After a client's service flow arrives at the load balancer node, the node computes the least-loaded server node and forwards the flow to that server for processing, thereby achieving load balancing.

When the traffic volume exceeds the maximum processing capability of the load balancer node, a new load balancer node usually has to be added to share the balancing work. When a load balancer node is added, the new node exposes a separate virtual IP address of its own for balancing service flows, and some of the service flows of the original node are routed to that virtual IP address so that the new node shares the balancing work. When the server cluster system is equipped with multiple load balancer nodes, the nodes work independently of one another; when a faulty node needs to be replaced, the services being processed on that node have to be interrupted.

The inventor found that in the prior-art server cluster system, when there are multiple load balancer nodes, the nodes work independently of one another and cannot be managed effectively, which may leave the load unbalanced across the nodes and degrades the overall load balancing performance of all the load balancer nodes.

Summary

The present invention provides a method, device, and system for load balancing, so as to improve the overall balancing performance of load balancer nodes when multiple load balancer nodes are used to balance service requests.

An embodiment of the present invention provides a load balancing method, including:

obtaining an Address Resolution Protocol (ARP) request sent by a client;

selecting a load balancer node in a load balancer cluster that meets a preset load condition, and obtaining the Media Access Control (MAC) address of the load balancer node that meets the preset load condition;

sending an ARP reply to the client, where the ARP reply carries the MAC address of the load balancer node that meets the preset load condition, so that after receiving the ARP reply the client sends its service requests to that node for balancing.

An embodiment of the present invention further provides a load balancing manager, including:

a first obtaining module, configured to obtain the ARP request sent by a client; a second obtaining module, configured to select a load balancer node in a load balancer cluster that meets a preset load condition, and obtain the MAC address of that node;

a sending module, configured to send an ARP reply to the client, where the ARP reply carries the MAC address of the load balancer node that meets the preset load condition, so that after receiving the ARP reply the client sends its service requests to that node for balancing.

An embodiment of the present invention further provides a load balancing system, including the above load balancing manager and a load balancer cluster, where the load balancer cluster includes at least two load balancer nodes;

the load balancer node is configured to balance a service request after receiving the service request sent by the client. By obtaining the ARP request sent by the client and, according to that request, selecting for the client a load balancer node that meets the preset load condition to perform service balancing, the present invention manages the load balancer cluster effectively and improves the overall balancing effect of the cluster.

Brief Description of the Drawings

FIG. 1a is a flowchart of a load balancing method according to a first embodiment of the present invention;
FIG. 1b is a schematic diagram of an application scenario of the load balancing method according to the first embodiment of the present invention; FIG. 2 is a flowchart of a load balancing method according to a second embodiment of the present invention;

FIG. 3 is a flowchart of a load balancing method according to a third embodiment of the present invention;

FIG. 4 is a schematic structural diagram of an LBM according to a fourth embodiment of the present invention;

FIG. 5 is a schematic structural diagram of a load balancing system according to a fifth embodiment of the present invention.

Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

FIG. 1a is a flowchart of a load balancing method according to the first embodiment of the present invention. The method of this embodiment may be executed by a load balancing manager (LoadBalance Manager, LBM). When a client issues a service request to the server cluster, the LBM obtains the Address Resolution Protocol (ARP) request sent by the client and, by way of ARP replacement, returns to the client the Media Access Control (MAC) address of a load balancer node that meets a preset load condition; that node then balances the client's service requests, i.e. selects a lightly loaded server to serve the client. A load balancer that meets the preset load condition is a lightly loaded one: it may be the least-loaded load balancer node in the cluster, or a node whose ratio of current load to rated load is below a preset threshold. The threshold may be set according to the specific application environment, for example 70%. The method includes the following steps:

Step 11: Obtain the ARP request sent by a client.

When requesting service from the server cluster, the client first sends an ARP request to learn which load balancer node will balance its service requests; that node selects a server to serve the client and returns the server's MAC address to the client, and the client then sends its service request to that server according to the MAC address, so that the server processes the request.

FIG. 1b is a schematic diagram of the application scenario of the load balancing method of this embodiment. In this embodiment, the load balancers form a load balancer cluster and share one virtual IP address. Before the cluster performs load balancing, every load balancer node sends its own MAC address to the LBM for registration; at registration a node may also send its device ID along with its MAC address, making the registration information fuller and easier for the LBM to manage. The LBM stores the MAC addresses and device IDs of all load balancer nodes so that it can later monitor and manage their states. In addition, the virtual IP address of the load balancer cluster must also be configured on the LBM's port, so that ARP requests sent by clients are first routed to the LBM.

After the above pre-configuration is completed, as shown in FIG. 1b, when a client requests service from the server cluster it first sends an ARP request using the virtual IP address of the load balancer cluster; since that virtual IP address has been configured on the LBM in advance, the ARP request is forwarded through the switch to the LBM.

In this embodiment, the LBM only obtains the clients' ARP requests and does not process the service requests the clients send; the service requests are balanced by the load balancer node selected by the LBM. The LBM device can therefore handle ARP requests from a larger number of clients.

Step 12: Select a load balancer node in the load balancer cluster that meets the preset load condition, and obtain the MAC address of that node.

The LBM first needs to obtain the load status of every load balancer node in the cluster, and then select from them a node that meets the preset load condition. The LBM may obtain a node's load status in several ways. For example, each load balancer node may be preset to periodically send load status information to the LBM; the load status information may include load details such as the number of services the node is currently processing, from which the LBM can determine the node's load status. Alternatively, the load balancer nodes in the cluster may be preset to periodically send heartbeat detection information to the LBM, with the above load status information attached to the heartbeat. The heartbeat detection information notifies the LBM of the node's health status and load level, e.g. whether the node works normally and whether its load exceeds capacity. When a node attaches load status information to its heartbeat, it no longer needs to send load status information to the LBM separately.

After obtaining the load status of each load balancer, the LBM can select the load balancer node in the cluster that meets the preset load condition. The selection method may be: selecting the least-loaded load balancer node in the cluster; or selecting a load balancer node in the cluster whose ratio of current load to rated load is below a preset threshold.

Because the LBM stores the MAC addresses of all load balancer nodes in the cluster in advance, and can obtain each node's load status, such as the number of service requests the node is processing, upon receiving the client's ARP request the LBM can obtain the load status of each node, select from the cluster a node that meets the preset load condition according to those statuses, and obtain that node's MAC address. If the LBM selects nodes whose current-to-rated load ratio is below the preset threshold and multiple nodes qualify, one of them may be chosen at random.
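The two selection policies described above (least-loaded node, or current-to-rated load ratio below a threshold with a random pick among qualifying nodes) can be sketched as follows. This is a minimal illustration, not the patented implementation; the node dictionaries and the 70% default are assumptions taken from the example threshold in the text:

```python
import random

def pick_node(nodes, threshold=None):
    """Select a load balancer node meeting the preset load condition.

    nodes: list of dicts with 'mac', 'current' (current load) and
    'rated' (rated load). With threshold=None the least-loaded node is
    chosen; otherwise a node whose current/rated ratio is below the
    threshold is picked at random, mirroring the random tie-breaking
    mentioned in the text. Returns None if no node qualifies.
    """
    if threshold is None:
        return min(nodes, key=lambda n: n["current"])
    candidates = [n for n in nodes if n["current"] / n["rated"] < threshold]
    return random.choice(candidates) if candidates else None
```

Either policy yields the MAC address the LBM would place into the ARP reply.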
Step 13: Send an ARP reply to the client, where the ARP reply carries the MAC address of the load balancer node that meets the preset load condition, so that after receiving the ARP reply the client sends its service requests to that node for balancing.

The LBM returns an ARP reply to the client carrying the MAC address of the node that meets the preset load condition. After receiving the reply, the client can, according to that MAC address, send its service requests to that node for load balancing. That load balancer node, according to the load conditions of the server cluster, forwards the service request data to a lightly loaded server, which processes the client's service request; load balancing of the server cluster is thereby achieved.
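The ARP-replacement flow of steps 11 to 13 can be sketched end to end. The class below is an illustrative assumption (the patent does not prescribe these data structures): the LBM owns the cluster's virtual IP, nodes register their MAC and device ID, and each ARP request for the VIP is answered with the MAC of a lightly loaded node, with the client-to-node mapping recorded for later failover:

```python
class LBM:
    """Minimal sketch of the ARP-replacement flow described above."""

    def __init__(self, vip):
        self.vip = vip
        self.nodes = {}          # balancer MAC -> current load (count)
        self.client_map = {}     # client MAC -> chosen balancer MAC

    def register(self, mac, device_id):
        # Balancer nodes register their MAC (and device ID) before use.
        self.nodes[mac] = 0

    def on_arp_request(self, client_mac, target_ip):
        if target_ip != self.vip:
            return None          # the request is not for the cluster VIP
        # Pick the least-loaded registered node and record the mapping.
        chosen = min(self.nodes, key=self.nodes.get)
        self.nodes[chosen] += 1
        self.client_map[client_mac] = chosen
        return {"type": "arp-reply", "ip": self.vip, "mac": chosen}
```

The client then addresses its service requests to the returned MAC, so the LBM itself never touches the service traffic.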
It can be seen from the above method that, because the LBM manages the load balancer cluster in a unified way, when the volume of client service requests grows the cluster can add new load balancer nodes accordingly to raise its balancing capacity. When a new node is added, it only needs to register with the LBM by providing its MAC address and device ID; the whole cluster does not have to be shut down for configuration, and no extra routing configuration is needed for the added node. Frequent service interruptions when adding or removing load balancer nodes are avoided, and in theory unlimited expansion of the cluster is supported; after expansion, no configuration changes to the live-network devices are needed, lowering management and maintenance costs. In the prior art, by contrast, every time a load balancer node is added, the offloaded client service flows must be rerouted to the new node, the live-network configuration must be modified, and the new node and its neighboring network devices must be configured, increasing the maintenance burden of the cluster. Compared with the prior art, the solution of this embodiment therefore makes management and maintenance of the cluster simpler and more effective, and reduces the maintenance burden when the cluster is expanded.

In this embodiment the LBM is the core device of the whole system, so its operation must be kept safe. As shown in FIG. 1b, the LBM may adopt a two-machine hot-standby mechanism (Hot Redundant), i.e. an LBM (Master) and an LBM (Slave) are deployed at the same time; the hot-standby arrangement ensures that the LBM works safely and reliably. The mechanism is similar to the prior art and is not described again.

In this embodiment, by obtaining the ARP request sent by the client and selecting for the client, according to the request, a load balancer node that meets the preset load condition to perform service balancing, the load balancer cluster is managed effectively and its overall balancing effect is improved.
On the basis of the technical solution of the first embodiment, detection and maintenance of the load balancer cluster may further be performed, e.g. detecting the states of the load balancer nodes; when a node is detected to be faulty or overloaded, the services being balanced on that node are transferred to other load balancer nodes.

FIG. 2 is a flowchart of the load balancing method of this embodiment. The method may be executed by the LBM and mainly considers transferring, when a load balancer node fails, the service requests on the faulty balancer to a normal load balancer node for balancing.

In this embodiment, after receiving the client's ARP request and selecting a load balancer node to balance the client's service requests, the LBM also needs to record the correspondence between the client's MAC address and the load balancer node, so that when the node fails, the MAC addresses of the clients using it can be obtained and those clients can be notified to issue their service requests anew.

The method of this embodiment includes:

Step 21: When it is determined that an unavailable load balancer node exists in the load balancer cluster, obtain the MAC addresses of the clients using the unavailable node.

In this embodiment, the load balancer nodes in the cluster may be preset to periodically send heartbeat detection information to the LBM; the heartbeat detection information notifies the LBM of the node's health status and load level, e.g. whether the node works normally and whether its load exceeds capacity. When the LBM receives no heartbeat detection information from a node for a certain period of time, it considers the node to have failed, judges it unavailable, and obtains the list of MAC addresses of the clients that were using the unavailable node, so that their service requests can be transferred to other normally working load balancer nodes for balancing.

In addition, the node's load status information may be attached to the heartbeat detection information, so that the nodes no longer need to send load status information to the LBM separately, simplifying the system flow.
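The heartbeat timeout rule above (a node that has not sent heartbeat detection information within the preset detection time is judged unavailable) can be sketched as follows. The 3-second window and the injectable clock are illustrative assumptions for testability, not values from the patent:

```python
import time

class HeartbeatMonitor:
    """Sketch of the preset-detection-time rule for judging nodes."""

    def __init__(self, detection_time=3.0, clock=time.monotonic):
        self.detection_time = detection_time
        self.clock = clock
        self.last_seen = {}      # node id -> timestamp of last heartbeat

    def on_heartbeat(self, node_id):
        # Called whenever a node's heartbeat detection information arrives.
        self.last_seen[node_id] = self.clock()

    def unavailable_nodes(self):
        # Nodes silent for longer than the detection time are unavailable.
        now = self.clock()
        return [n for n, t in self.last_seen.items()
                if now - t > self.detection_time]
```

In a real deployment the heartbeat payload would also carry the health status and load level mentioned in the text.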
Step 22: According to the obtained MAC addresses of the clients, transfer the clients' service requests to a normally working load balancer node for balancing.

Usually, when a load balancer node fails, the service requests it was processing can be transferred directly to the load balancer node that meets the preset load condition.

The method of determining the node in the cluster that meets the preset load condition is the same as in the first embodiment and is not repeated here. After such a node is selected, it can serve as the substitute for the unavailable node and provide load balancing of service requests for the clients that corresponded to the unavailable node.

The LBM obtains the MAC addresses of the clients using the unavailable node and sends them an update notification carrying the MAC address of the node that meets the preset load condition, so that after receiving the notification each client sends its service requests, according to that MAC address, to that node for balancing.
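The failover path just described (look up the clients mapped to the failed node, pick a substitute meeting the load condition, and notify each client of the new MAC) can be sketched as follows. The dictionaries are illustrative assumptions; "least loaded" stands in for the preset load condition:

```python
def fail_over(client_map, loads, dead_node):
    """client_map: client MAC -> balancer MAC; loads: balancer MAC ->
    current load. Removes the dead node, remaps each affected client to
    a substitute node, and returns the update notifications the LBM
    would send (each carrying the substitute's MAC address)."""
    loads.pop(dead_node, None)           # stop sending requests to it
    affected = [c for c, n in client_map.items() if n == dead_node]
    notifications = []
    for client in affected:
        substitute = min(loads, key=loads.get)   # preset load condition
        loads[substitute] += 1
        client_map[client] = substitute
        notifications.append({"to": client, "new_mac": substitute})
    return notifications
```

Note that the dead node is also removed from the pool, matching the requirement that the LBM record the unavailable node and stop routing requests to it.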
In addition, the LBM should also record the unavailable load balancer node, and no longer send service requests to it when handling subsequent client requests.

It can be seen from the above method that in this embodiment, when a load balancer node in the cluster fails and becomes unavailable, the LBM can send the service requests being balanced on the unavailable node to a substitute node, so that client service requests are not stopped by a node becoming unavailable, and the cluster can provide better disaster recovery capability.

In this embodiment the state of a load balancer node is judged by means of heartbeat detection information; when an unavailable node is found, sending further service requests to it can be avoided and its traffic can be transferred to other load balancer nodes, improving the disaster recovery performance of the load balancer cluster, enabling the cluster to provide a better load balancing effect, and improving the load balancing performance of the cluster system.
FIG. 3 is a flowchart of the load balancing method of this embodiment. The method may be executed by the LBM and mainly considers how to perform load balancing when a load balancer node is overloaded.

In this embodiment, after receiving the client's ARP request and selecting a load balancer node to balance the client's service requests, the LBM also needs to record the correspondence between the client's MAC address and the node, so that when the node is overloaded, the MAC addresses of the clients using it can be obtained and the service requests of some clients can be transferred to other nodes for processing, eliminating the node's overload condition.

The method of this embodiment includes:

Step 31: When it is determined that an overloaded load balancer node exists in the load balancer cluster, obtain the MAC addresses of the over-capacity clients; the over-capacity clients are those clients using the overloaded node that exceed the carrying capacity of the overloaded balancer.

In this embodiment, the load balancer nodes in the cluster may be preset to periodically send heartbeat detection information to the LBM, notifying it of each node's health status and load level, e.g. whether the node works normally and whether its load exceeds capacity. If the heartbeat detection information the LBM receives from a node shows poor health or excessive load, e.g. the load exceeds a preset threshold, the node is considered overloaded. The LBM obtains the MAC addresses of the clients that were using the overloaded node and selects the over-capacity clients among them, obtaining their MAC addresses. For example, if the upper limit a node can bear is the service requests of 100 clients, then when it carries the requests of 120 clients it is considered overloaded and may fail to work normally. The LBM then needs to obtain the MAC addresses of 20 of the clients using the node, so that those clients' service requests can be transferred to other normally working load balancer nodes for balancing.

Step 32: According to the MAC addresses of the over-capacity clients, transfer the over-capacity clients' service requests to a normally working load balancer for balancing.

In this embodiment, when transferring the over-capacity clients' service requests to a normally working load balancer, the requests may be transferred directly to a load balancer node that meets the preset load condition, as follows:

Determine the load balancer node in the cluster that meets the preset load condition; the method is the same as in the first embodiment and is not repeated here. After such a node is selected, it can take over some of the client service requests to relieve the overloaded balancer.

According to the MAC addresses of the over-capacity clients, send an update notification to each such client, carrying the MAC address of the node that meets the preset load condition, so that after receiving the notification the client sends its service requests, according to that MAC address, to that node for balancing.

An example: when selecting clients, the LBM may select the service requests of the clients that push the node beyond its capacity. For instance, if load balancer 1 can carry at most the service requests of 100 clients, then when it is overloaded with 120 requests, the requests of the 20 excess clients are transferred to load balancer 2, which meets the preset load condition. The overload condition of load balancer node 1 is thereby eliminated.

It should be noted that after the LBM transfers client service requests to load balancer 2, which meets the preset load condition, load balancer 2 must not become overloaded as a result. If the client service requests to be transferred exceed the client traffic that load balancer 2 can still carry, only part of the client traffic is transferred to it. For example, if load balancer 2 carries at most 100 client service requests and currently carries 90, while 20 client service requests need to be transferred, only 10 may be transferred to load balancer 2; the LBM then again obtains a load balancer 3 that meets the preset load condition and transfers part of the traffic to it, proceeding in this way until all the excess traffic on load balancer 1 has been transferred. Load balancing of the whole load balancer cluster is thereby achieved.
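The spill-over procedure in the worked example (move the excess clients off the overloaded node, filling each candidate only up to its rated capacity, and moving on to the next candidate while excess remains) can be sketched as follows; the node dictionaries are illustrative assumptions:

```python
def relieve_overload(overloaded, peers):
    """Move the excess clients of an overloaded balancer onto peers.

    overloaded and each peer are dicts with 'current' and 'rated'
    client counts. Each peer is filled only up to its rated capacity.
    Returns a list of (peer index, clients redirected) moves.
    """
    excess = overloaded["current"] - overloaded["rated"]
    moves = []
    for i, peer in enumerate(peers):
        if excess <= 0:
            break                      # overload fully relieved
        room = peer["rated"] - peer["current"]
        if room <= 0:
            continue                   # this peer has no spare capacity
        moved = min(room, excess)
        peer["current"] += moved
        overloaded["current"] -= moved
        excess -= moved
        moves.append((i, moved))
    return moves
```

Run against the numbers in the text (node 1 at 120/100, node 2 at 90/100, node 3 at 50/100), it reproduces the 10 + 10 split described above.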
It can be seen from the above method that in this embodiment, when a load balancer node in the cluster is overloaded, the LBM can send part of the service requests being balanced on that node to other load balancer nodes, enabling the cluster to provide a better load balancing effect.

In this embodiment the state of a load balancer node is judged by means of heartbeat detection information; when an overloaded node is found, its traffic can be transferred to other load balancer nodes, enabling the cluster to provide a better load balancing effect and improving the load balancing performance of the cluster system.
FIG. 4 is a schematic structural diagram of the LBM according to the fourth embodiment of the present invention. As shown in FIG. 4, corresponding to the foregoing method embodiments, the LBM of this embodiment may include a first obtaining module 41, a second obtaining module 42, and a sending module 43.

The first obtaining module 41 is configured to obtain the ARP request sent by a client; the second obtaining module 42 is configured to select a load balancer node in the load balancer cluster that meets the preset load condition and obtain that node's MAC address;

the sending module 43 is configured to send an ARP reply to the client, where the ARP reply carries the MAC address of the node that meets the preset load condition, so that after receiving the reply the client sends its service requests to that node for balancing. When selecting a node that meets the preset load balancing condition, the second obtaining module 42 may select the least-loaded node in the cluster, or select a node in the cluster whose ratio of current load to rated load is below a preset threshold.

In this embodiment the load balancers form a load balancer cluster and share one virtual IP address, and that virtual IP address is configured on the LBM's port. Before the cluster performs load balancing, all load balancer nodes send their device IDs and MAC addresses to the LBM for registration; the LBM stores the MAC addresses and device IDs of all nodes so that it can later monitor and manage their states. When a client requests service from the server cluster, it first sends an ARP request using the cluster's virtual IP address; after passing through the switch, the request is forwarded to the LBM. The LBM selects a node that meets the preset load condition to balance the service requests, and that node selects the server that will serve the client.

In addition, detection and maintenance of the load balancer cluster may further be performed, e.g. detecting the states of the nodes; when a node is detected to be faulty or overloaded, the services being balanced on it are transferred to other load balancer nodes.

Considering that an unavailable load balancer node may exist in the cluster, the LBM further includes:

a first determining module 44, configured to determine whether an unavailable load balancer node exists in the load balancer cluster;

a third obtaining module 45, configured to obtain, when the first determining module determines that an unavailable node exists in the cluster, the MAC addresses of the clients using the unavailable node;

a first equalization module 46, configured to transfer, according to the obtained MAC addresses of the clients, the clients' service requests to a normally working load balancer node for balancing. When determining whether an unavailable node exists in the cluster, the first determining module 44 is configured to judge a node unavailable when no heartbeat detection information is received from it within the preset detection time.

Usually, when a load balancer node fails, the service requests it was processing can be transferred directly to a node that meets the preset load condition. The first equalization module 46 includes:

a determining unit 46a, configured to determine a load balancer node in the cluster that meets the preset load condition;

an updating unit 46b, configured to send, according to the obtained MAC addresses of the clients, an update notification to each client, carrying the MAC address of the node that meets the preset load condition, so that after receiving the notification the client sends its service requests to that node for balancing.

When determining a lightly loaded node, the determining unit 46a first obtains the load status of each load balancer node in the cluster, and then determines from those statuses the node in the cluster that meets the preset load condition.

If how to perform load balancing when a load balancer node is overloaded needs to be considered, the LBM includes:

a second determining module 47, configured to determine whether an overloaded load balancer node exists in the load balancer cluster;

a fourth obtaining module 48, configured to obtain, when the second determining module determines that an overloaded node exists in the cluster, the MAC addresses of the over-capacity clients, i.e. those clients of the overloaded node that exceed its carrying capacity;

a second equalization module 49, configured to transfer, according to the MAC addresses of the over-capacity clients, the over-capacity clients' service requests to a normally working load balancer for balancing.

When the over-capacity clients' service requests are transferred to a normally working load balancer, they may be transferred directly to a node that meets the preset load condition. The second equalization module 49 includes:

a determining unit 49a, configured to determine a load balancer node in the cluster that meets the preset load condition;

an updating unit 49b, configured to send, according to the MAC addresses of the over-capacity clients, an update notification to each such client, carrying the MAC address of the node that meets the preset load condition, so that after receiving the notification the client sends its service requests, according to that MAC address, to that node for balancing.

When determining a lightly loaded node, the determining unit 49a first obtains the load status of each load balancer node in the cluster, and then determines from those statuses the node in the cluster that meets the preset load condition.

For the functions of the LBM of this embodiment and the interaction mechanisms and effects among its modules, reference may be made to the embodiments corresponding to FIG. 1a to FIG. 3, which are not repeated here.

In this embodiment, by obtaining the ARP request sent by the client and selecting for the client, according to the request, a load balancer node that meets the preset load condition to perform service balancing, the load balancer cluster is managed effectively and its balancing effect is improved. When unavailable and overloaded load balancer nodes are found, their traffic can be transferred to other load balancer nodes, improving the disaster recovery performance and load balancing performance of the cluster.
FIG. 5 is a schematic structural diagram of the load balancing system according to the fifth embodiment of the present invention. The system includes an LBM 51 and a load balancer cluster 52, where the cluster 52 includes at least two load balancer nodes 52a.

The structure and functions of the LBM are the same as in the fourth embodiment corresponding to FIG. 4; for its functions and the interaction mechanisms and effects among its modules, reference may be made to the embodiments corresponding to FIG. 1a to FIG. 3, which are not repeated here. The load balancer node 52a is configured to balance a service request after receiving the service request sent by the client.

The load balancer node 52a is further configured to send load status information to the LBM;

the LBM 51 is further configured to determine the load status of the load balancer node 52a according to its load status information.

In this embodiment, the load balancer nodes 52a in the cluster 52 may be preset to periodically send heartbeat detection information to the LBM 51, notifying it of each node's health status and load level so that the LBM 51 can judge whether the node is available. When the LBM 51 receives no heartbeat detection information from a node for a certain period of time, it considers the node to have failed and judges it to be an unavailable node.

The load balancer node 52a is further configured to send heartbeat detection information to the LBM 51 within a preset time interval;

the LBM 51 is further configured to judge a load balancer node in the cluster to be an unavailable load balancer node when no heartbeat detection information is received from it within the preset detection time.

For the interaction mechanisms and effects between the LBM 51 and the load balancer nodes 52a of this embodiment, reference may be made to the embodiments corresponding to FIG. 1 to FIG. 4, which are not repeated here.

In this embodiment, by obtaining the ARP request sent by the client and selecting for the client, according to the request, a load balancer node that meets the preset load condition to perform service balancing, the load balancer cluster is managed effectively and its balancing effect is improved. When unavailable and overloaded load balancer nodes are found, their traffic can be transferred to other load balancer nodes, improving the disaster recovery performance and load balancing performance of the cluster. Persons of ordinary skill in the art will understand that the drawings are only schematic diagrams of one embodiment, and that the modules or flows in the drawings are not necessarily required for implementing the present invention. Persons of ordinary skill in the art will understand that the modules in the apparatuses of the embodiments may be distributed in the apparatuses as described, or may be changed accordingly and located in one or more apparatuses different from those of the embodiments. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.

The serial numbers of the foregoing embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.

Persons of ordinary skill in the art will understand that all or part of the steps of the foregoing method embodiments may be implemented by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The storage medium includes any medium capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.

Finally, it should be noted that the above embodiments are merely intended to describe the technical solutions of the present invention rather than to limit them. Although the present invention is described in detail with reference to the foregoing embodiments, persons of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements to some technical features thereof; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims

Claims

1. A load balancing method, comprising:

obtaining an Address Resolution Protocol (ARP) request sent by a client;

selecting a load balancer node in a load balancer cluster that meets a preset load condition, and obtaining a Media Access Control (MAC) address of the load balancer node that meets the preset load condition;

sending an ARP reply to the client, wherein the ARP reply carries the MAC address of the load balancer node that meets the preset load condition, so that after receiving the ARP reply the client sends a service request to the load balancer node that meets the preset load condition for balancing.

2. The load balancing method according to claim 1, wherein the selecting a load balancer node in a load balancer cluster that meets a preset load condition comprises:

selecting the least-loaded load balancer node in the load balancer cluster;

or, selecting a load balancer node in the load balancer cluster whose ratio of current load to rated load is below a preset threshold.

3. The load balancing method according to claim 1, further comprising: determining whether an unavailable load balancer node exists in the load balancer cluster, and if so, obtaining MAC addresses of clients using the unavailable load balancer node;

according to the obtained MAC addresses of the clients, transferring the clients' service requests to a normally working load balancer node for balancing.

4. The load balancing method according to claim 3, wherein transferring the clients' service requests to a normally working load balancer node for balancing comprises:

determining a load balancer node in the load balancer cluster that meets the preset load condition;

according to the obtained MAC addresses of the clients, sending an update notification to the clients, wherein the update notification carries the MAC address of the load balancer node that meets the preset load condition, so that after receiving the update notification the client sends service requests to that load balancer node for balancing.

5. The load balancing method according to claim 1, further comprising: determining whether an overloaded load balancer node exists in the load balancer cluster, and if so, obtaining MAC addresses of over-capacity clients, wherein the over-capacity clients are those clients using the overloaded load balancer node that exceed the carrying capacity of the overloaded load balancer;

according to the MAC addresses of the over-capacity clients, transferring the over-capacity clients' service requests to a normally working load balancer for balancing.

6. The load balancing method according to claim 5, wherein transferring the over-capacity clients' service requests to a normally working load balancer for balancing comprises: determining a load balancer node in the load balancer cluster that meets the preset load condition;

according to the MAC addresses of the over-capacity clients, sending an update notification to the clients, wherein the update notification carries the MAC address of the load balancer node that meets the preset load condition, so that after receiving the update notification the client sends service requests, according to that MAC address, to that load balancer node for balancing.

7. The load balancing method according to claim 1, wherein selecting a load balancer node in the load balancer cluster that meets a preset load condition comprises:

obtaining a load status of each load balancer node in the load balancer cluster;

calculating, according to the load statuses of the load balancer nodes, the load balancer node in the cluster that meets the preset load condition.

8. The load balancing method according to claim 3, wherein determining whether an unavailable load balancer node exists in the load balancer cluster comprises:

when no heartbeat detection information is received from a load balancer node in the cluster within a preset detection time, judging the load balancer node to be an unavailable load balancer node.

9. A load balancing manager, comprising:

a first obtaining module, configured to obtain an ARP request sent by a client; a second obtaining module, configured to select a load balancer node in a load balancer cluster that meets a preset load condition, and obtain a Media Access Control (MAC) address of that node;

a sending module, configured to send an ARP reply to the client, wherein the ARP reply carries the MAC address of the load balancer node that meets the preset load condition, so that after receiving the ARP reply the client sends a service request to that node for balancing.

10. The load balancing manager according to claim 9, wherein

the second obtaining module is configured to select the least-loaded load balancer node in the cluster; or select a load balancer node in the cluster whose ratio of current load to rated load is below a preset threshold.

11. The load balancing manager according to claim 9, further comprising: a first determining module, configured to determine whether an unavailable load balancer node exists in the load balancer cluster;

a third obtaining module, configured to obtain, when the first determining module determines that an unavailable load balancer node exists in the cluster, MAC addresses of the clients using the unavailable node;

a first equalization module, configured to transfer, according to the obtained MAC addresses of the clients, the clients' service requests to a normally working load balancer node for balancing.

12. The load balancing manager according to claim 11, wherein the first equalization module comprises:

a determining unit, configured to determine a load balancer node in the cluster that meets the preset load condition;

an updating unit, configured to send, according to the obtained MAC addresses of the clients, an update notification to the clients, wherein the update notification carries the MAC address of the node that meets the preset load condition, so that after receiving the notification the client sends service requests to that node for balancing.

13. The load balancing manager according to claim 9, further comprising: a second determining module, configured to determine whether an overloaded load balancer node exists in the load balancer cluster;

a fourth obtaining module, configured to obtain, when the second determining module determines that an overloaded node exists in the cluster, MAC addresses of over-capacity clients, wherein the over-capacity clients are those clients using the overloaded node that exceed the carrying capacity of the overloaded load balancer;

a second equalization module, configured to transfer, according to the MAC addresses of the over-capacity clients, the over-capacity clients' service requests to a normally working load balancer for balancing.

14. The load balancing manager according to claim 13, wherein the second equalization module comprises:

a determining unit, configured to determine a load balancer node in the cluster that meets the preset load condition;

an updating unit, configured to send, according to the MAC addresses of the over-capacity clients, an update notification to the clients, wherein the update notification carries the MAC address of the node that meets the preset load condition, so that after receiving the notification the client sends service requests, according to that MAC address, to that node for balancing.

15. The load balancing manager according to claim 9, wherein

the second obtaining module is configured to obtain a load status of each load balancer node in the cluster, and calculate, according to the load statuses of the nodes, the load balancer node in the cluster that meets the preset load condition.

16. The load balancing manager according to claim 11, wherein

the first determining module is configured to judge a load balancer node in the cluster to be an unavailable load balancer node when no heartbeat detection information is received from it within a preset detection time.

17. A load balancing system, comprising the load balancing manager according to any one of claims 9 to 16, and further comprising a load balancer cluster, wherein the load balancer cluster includes at least two load balancer nodes; the load balancer node is configured to balance a service request after receiving the service request sent by the client.

18. The load balancing system according to claim 17, wherein

the load balancer node is further configured to send heartbeat detection information to the load balancing manager within a preset time interval;

the load balancing manager is further configured to judge a load balancer node in the cluster to be an unavailable load balancer node when no heartbeat detection information is received from it within a preset detection time.

19. The load balancing system according to claim 17, wherein

the load balancer node is further configured to send load status information to the load balancing manager; and the load balancing manager is further configured to determine the load status of the load balancer node according to the load status information of the load balancer node.
PCT/CN2011/073690 2010-08-25 2011-05-05 负载均衡的方法、设备和系统 WO2011140951A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201010268922.2A CN102143046B (zh) 2010-08-25 2010-08-25 负载均衡的方法、设备和系统
CN201010268922.2 2010-08-25

Publications (1)

Publication Number Publication Date
WO2011140951A1 true WO2011140951A1 (zh) 2011-11-17

Family

ID=44410285

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/073690 WO2011140951A1 (zh) 2010-08-25 2011-05-05 负载均衡的方法、设备和系统

Country Status (2)

Country Link
CN (1) CN102143046B (zh)
WO (1) WO2011140951A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10015049B2 (en) 2014-02-13 2018-07-03 Sap Se Configuration of network devices in a network
CN112416888A (zh) * 2020-10-16 2021-02-26 上海哔哩哔哩科技有限公司 用于分布式文件系统的动态负载均衡方法及系统
CN113326100A (zh) * 2021-06-29 2021-08-31 深信服科技股份有限公司 一种集群管理方法、装置、设备及计算机存储介质
CN114971079A (zh) * 2022-06-29 2022-08-30 中国工商银行股份有限公司 秒杀型交易处理优化方法和装置
CN116893900A (zh) * 2023-07-19 2023-10-17 合芯科技有限公司 集群计算压力负载均衡方法、系统、设备及ic设计平台

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102447624B (zh) * 2011-11-23 2014-09-17 华为数字技术(成都)有限公司 在服务器集群上实现负载均衡的方法、节点服务器及集群
CN103166979B (zh) * 2011-12-08 2016-01-27 腾讯科技(深圳)有限公司 自适应负载均衡实现方法和接入服务器
CN103188277B (zh) * 2011-12-27 2016-05-18 中国电信股份有限公司 负载能耗管理系统、方法和服务器
CN103220354A (zh) * 2013-04-18 2013-07-24 广东宜通世纪科技股份有限公司 一种实现服务器集群负载均衡的方法
CN104683254A (zh) * 2013-11-29 2015-06-03 英业达科技有限公司 路由控制方法与装置
CN105100175B (zh) * 2014-05-22 2019-01-22 北京猎豹网络科技有限公司 一种服务器集群控制方法、中心服务器及节点服务器
CN104202386B (zh) * 2014-08-27 2018-09-14 四川长虹电器股份有限公司 一种高并发量分布式文件系统及其二次负载均衡方法
CN104301414A (zh) * 2014-10-21 2015-01-21 无锡云捷科技有限公司 基于网络协议栈的服务器负载均衡方法
CN104410677B (zh) * 2014-11-18 2017-12-19 北京国双科技有限公司 服务器负载均衡方法和装置
CN104579765B (zh) * 2014-12-27 2019-02-26 北京奇虎科技有限公司 一种集群系统的容灾方法和装置
CN105159775A (zh) * 2015-08-05 2015-12-16 浪潮(北京)电子信息产业有限公司 基于负载均衡器的云计算数据中心的管理系统和管理方法
CN105939371A (zh) * 2015-11-24 2016-09-14 中国银联股份有限公司 云计算的负载均衡方法及系统
CN107229519B (zh) * 2016-03-25 2021-04-23 阿里巴巴集团控股有限公司 任务调度方法和装置
CN106375420B (zh) * 2016-08-31 2020-01-10 宝信软件(武汉)有限公司 一种基于负载均衡的服务器集群智能监控系统及方法
CN108063783A (zh) * 2016-11-08 2018-05-22 上海有云信息技术有限公司 一种负载均衡器的部署方法及装置
CN106533774A (zh) * 2016-11-28 2017-03-22 郑州云海信息技术有限公司 一种lvs系统的构建方法及lvs系统
CN108134810B (zh) * 2016-12-01 2020-01-07 中国移动通信有限公司研究院 一种确定资源调度组件的方法及其系统
CN109274986B (zh) * 2017-07-17 2021-02-12 中兴通讯股份有限公司 多中心容灾方法、系统、存储介质和计算机设备
CN107547394A (zh) * 2017-08-14 2018-01-05 新华三信息安全技术有限公司 一种负载均衡设备多活部署方法和装置
CN108881368A (zh) * 2018-04-22 2018-11-23 平安科技(深圳)有限公司 高并发业务请求处理方法、装置、计算机设备和存储介质
CN108804225B (zh) * 2018-05-24 2021-01-01 新华三云计算技术有限公司 一种虚拟机负载调控方法和装置
CN110308983B (zh) * 2019-04-19 2022-04-05 中国工商银行股份有限公司 资源负载均衡方法及系统、服务节点和客户端
CN110389831B (zh) * 2019-06-14 2021-11-02 网宿科技股份有限公司 维护负载均衡配置的方法和服务器监管设备
CN110519365B (zh) * 2019-08-26 2022-07-01 网宿科技股份有限公司 一种变更设备业务的方法和业务变更系统
CN112910942B (zh) * 2019-12-03 2024-05-24 华为技术有限公司 一种服务处理方法及相关装置
CN111478937B (zh) * 2020-02-29 2022-05-27 新华三信息安全技术有限公司 一种负载均衡方法和装置
CN112272206B (zh) * 2020-09-18 2022-12-02 苏州浪潮智能科技有限公司 一种负载均衡设备的管理方法及系统
CN113098788B (zh) * 2021-03-08 2023-03-24 杭州迪普科技股份有限公司 一种路由发布的方法及装置
CN115134424B (zh) * 2022-06-29 2024-02-02 中国工商银行股份有限公司 负载均衡方法、装置、计算机设备、存储介质和程序产品

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1531262A (zh) * 2003-03-11 2004-09-22 ��Ϊ�������޹�˾ 实现网络负载分担功能的网络通信方法
CN1728661A (zh) * 2004-07-31 2006-02-01 华为技术有限公司 在地址解析协议代理上实现备份和负载均摊的方法
CN101217483A (zh) * 2008-01-21 2008-07-09 中兴通讯股份有限公司 用于实现集群服务器内负载分担代理的方法
CN101404619A (zh) * 2008-11-17 2009-04-08 杭州华三通信技术有限公司 一种实现服务器负载均衡的方法和一种三层交换机

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101184039B (zh) * 2007-11-30 2010-06-09 北京大学 一种以太网负载均衡的方法
CN101459659B (zh) * 2007-12-11 2011-10-05 华为技术有限公司 一种地址解析协议报文处理方法及通讯系统以及网元

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1531262A (zh) * 2003-03-11 2004-09-22 ��Ϊ�������޹�˾ 实现网络负载分担功能的网络通信方法
CN1728661A (zh) * 2004-07-31 2006-02-01 华为技术有限公司 在地址解析协议代理上实现备份和负载均摊的方法
CN101217483A (zh) * 2008-01-21 2008-07-09 中兴通讯股份有限公司 用于实现集群服务器内负载分担代理的方法
CN101404619A (zh) * 2008-11-17 2009-04-08 杭州华三通信技术有限公司 一种实现服务器负载均衡的方法和一种三层交换机

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10015049B2 (en) 2014-02-13 2018-07-03 Sap Se Configuration of network devices in a network
CN112416888A (zh) * 2020-10-16 2021-02-26 上海哔哩哔哩科技有限公司 用于分布式文件系统的动态负载均衡方法及系统
CN112416888B (zh) * 2020-10-16 2024-03-12 上海哔哩哔哩科技有限公司 用于分布式文件系统的动态负载均衡方法及系统
CN113326100A (zh) * 2021-06-29 2021-08-31 深信服科技股份有限公司 一种集群管理方法、装置、设备及计算机存储介质
CN113326100B (zh) * 2021-06-29 2024-04-09 深信服科技股份有限公司 一种集群管理方法、装置、设备及计算机存储介质
CN114971079A (zh) * 2022-06-29 2022-08-30 中国工商银行股份有限公司 秒杀型交易处理优化方法和装置
CN114971079B (zh) * 2022-06-29 2024-05-28 中国工商银行股份有限公司 秒杀型交易处理优化方法和装置
CN116893900A (zh) * 2023-07-19 2023-10-17 合芯科技有限公司 集群计算压力负载均衡方法、系统、设备及ic设计平台

Also Published As

Publication number Publication date
CN102143046B (zh) 2015-03-11
CN102143046A (zh) 2011-08-03

Similar Documents

Publication Publication Date Title
WO2011140951A1 (zh) 负载均衡的方法、设备和系统
US10257265B2 (en) Redundancy network protocol system
KR101523457B1 (ko) 지오-리던던트 게이트에서 세션 복원을 위한 시스템 및 방법
US7609619B2 (en) Active-active data center using RHI, BGP, and IGP anycast for disaster recovery and load distribution
US9100329B1 (en) Providing non-interrupt failover using a link aggregation mechanism
JP3452466B2 (ja) 着信メッセージを経路指定する方法及びシステム
JP4594422B2 (ja) クラスター化ノードを権限のあるドメインネームサーバーとして使用してアクティブ負荷のバランスをとるためのシステム、ネットワーク装置、方法、及びコンピュータプログラム製品
EP3874695A1 (en) Methods, systems, and computer readable media for providing a service proxy function in a telecommunications network core using a service-based architecture
JP5873597B2 (ja) 仮想ファブリックリンク障害復旧のためのシステムおよび方法
WO2023280166A1 (zh) 跨区域通信方法及设备、计算机可读存储介质
JP2022502926A (ja) Ue移行方法、装置、システム、および記憶媒体
WO2011157151A2 (zh) 实现容灾备份的方法、设备及系统
WO2017050254A1 (zh) 热备方法、装置及系统
KR101691759B1 (ko) 가상 섀시 시스템 제어 프로토콜
WO2018090386A1 (zh) 一种nf组件异常的处理方法、设备及系统
US20220131935A1 (en) Service Unit Switching Method, System, and Device
WO2020057445A1 (zh) 一种通信系统、方法及装置
CN109688006B (zh) 支持目标集群动态探测的高性能网络日志消息分发方法
US20140258551A1 (en) Method for Implementing Session Border Controller Pool, and Session Border Controller
WO2013146808A1 (ja) コンピュータシステム、及び通信経路変更方法
KR101665276B1 (ko) 가상 섀시 시스템에서 패스 스루 모드를 위한 시스템 및 방법
CN100563263C (zh) 在网络存储业务中实现系统高可用性的方法和系统
Alasadi et al. SSED: Servers under software-defined network architectures to eliminate discovery messages
WO2014146541A1 (zh) Cdn与网络融合系统、调度模块选定方法及计算机存储介质
WO2012065440A1 (zh) 虚拟路由器冗余协议备份组中设备优先级实现方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11780172

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11780172

Country of ref document: EP

Kind code of ref document: A1