WO2018077238A1 - Switch-based load balancing system and method - Google Patents


Info

Publication number
WO2018077238A1
Authority
WO
WIPO (PCT)
Prior art keywords
load balancing
switch
entry
communication link
end scheduling
Application number
PCT/CN2017/108046
Other languages
English (en)
Chinese (zh)
Inventor
苗辉
李骏逸
Original Assignee
贵州白山云科技有限公司
Application filed by 贵州白山云科技有限公司
Publication of WO2018077238A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00: Traffic control in data switching networks
    • H04L47/10: Flow control; Congestion control
    • H04L47/12: Avoiding congestion; Recovering from congestion
    • H04L47/125: Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/50: Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Definitions

  • the present invention relates to the field of network communication technologies, and in particular, to a switch-based load balancing system and method.
  • Linux Virtual Server (LVS) is a virtual server cluster system that uses IP load balancing technology and content-based request distribution technology.
  • among load scheduler implementations, IP load balancing technology is the most efficient.
  • NAT: Network Address Translation
  • VS/NAT: Virtual Server via Network Address Translation
  • VS/TUN: Virtual Server via IP Tunneling
  • LVS cluster services using IP load balancing technology provide services in a layered manner.
  • Load scheduling layer: located at the forefront of the entire cluster system, it consists of two (in NAT, DR, or TUN mode) or more (in FullNAT mode) load schedulers (that is, front-end scheduling servers, or schedulers for short).
  • the back-end application service layer actually consists of a group of machines running application services.
  • Back-end application service layer: the hardware of the servers (referred to as Realservers) need not be uniform and may differ from machine to machine.
  • the front-end scheduling server can manually define a scheduling mechanism to schedule the back-end application servers.
  • the working principle is as follows: when a large amount of data accesses an application service (e.g., a WWW or DNS service), the data first passes through the load scheduler.
  • the load scheduler distributes the data across multiple back-end application servers through various scheduling algorithms, such as round-robin scheduling, weighted scheduling, and the ECP algorithm, so that the back-end application servers provide services more efficiently and evenly.
  • the load scheduler can also use a detection mechanism (such as keepalive) to discover and then remove back-end application servers that can no longer provide service.
  • the existing LVS cluster service can effectively provide stable and reliable services, but it also brings the following problems:
  • the front-end scheduling servers in the LVS cluster service system use an active/standby architecture: only one server provides service while the remaining one or more scheduling servers stand by, and the coordination between primary and backup servers is also rather complicated.
  • when the primary scheduling server fails and cannot provide service, various mechanisms are needed for the standby server to discover that the primary is down and take over its role; this primary/backup division means the scheduling servers' resources cannot be fully utilized.
  • the standby scheduling server plays a long-waiting role and cannot actively provide services, resulting in wasted resources.
  • since only one primary scheduling server provides service, the front end cannot withstand malicious mass access; the primary scheduling server may become overloaded and unable to provide external services.
  • the load scheduling architecture of one active and one (or more) standby servers scales poorly and cannot have two or more scheduling servers providing service at the same time.
  • the existing LVS cluster service system has problems such as waste of resources, poor anti-interference ability, and poor scalability.
  • the embodiment of the invention provides a switch-based load balancing system and method, which solves the problems of resource waste, poor anti-interference ability and poor scalability of the existing LVS cluster service system.
  • the switch-based load balancing system of the present invention includes: a switch, N front-end scheduling servers, and M application servers, where N and M are integers greater than one;
  • the switch is configured to receive a request data packet sent by the client, determine, according to a preset equalization condition, the same next hop address allocated to the request data packet, select from the equal-cost multipath (ECMP) routing entries the path entry corresponding to the determined next hop address, and send the request data packet to the corresponding front-end scheduling server over the communication link corresponding to the selected path entry;
  • the front-end scheduling server is configured to send the received request data packet to a corresponding application server;
  • the application server is configured to respond to the request data packet sent by the front-end scheduling server, and return a corresponding response result to the user end;
  • the preset equalization condition includes: the hash values are the same, the total number of next hops of each path entry in the ECMP routing entries is the same, and the next-hop egress information stored at the same offset is the same.
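The slot-based lookup implied by this condition can be sketched as follows. This is a minimal illustrative model in Python, not the patented implementation; the slot count, next-hop addresses, and the MD5-based hash are assumptions standing in for the switch chip's fixed hardware hash.

```python
import hashlib

# Illustrative model of the "preset equalization condition": a fixed-size
# slot table in which every slot offset stores exactly one next-hop exit,
# and all path entries see the same total number of next hops.
NEXT_HOPS = ["1.1.1.1", "1.1.1.2", "1.1.1.3", "1.1.1.4"]
SLOTS = 64

# Slot i stores the exit of next hop i % len(NEXT_HOPS), so the exit
# information stored at the same offset is always the same.
slot_table = [NEXT_HOPS[i % len(NEXT_HOPS)] for i in range(SLOTS)]

def flow_hash(src_ip, dst_ip, src_port=0, dst_port=0):
    """Deterministic hash over the flow tuple (stand-in for the chip hash)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return int.from_bytes(hashlib.md5(key).digest()[:4], "big")

def next_hop(src_ip, dst_ip, src_port=0, dst_port=0):
    # Identical flow tuples produce identical hash values, hence the same
    # slot offset and the same next-hop address.
    return slot_table[flow_hash(src_ip, dst_ip, src_port, dst_port) % SLOTS]
```

Under these three conditions, every packet of a given flow resolves to the same next hop, which is what lets the switch pin a connection to one front-end scheduling server.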
  • the above load balancing system may further include:
  • the switch is further configured to acquire link state information of a communication link with each of the front-end scheduling servers before receiving the request data packet, and update the ECMP routing table according to the link state information.
  • the front-end scheduling server is further configured to receive the link state information.
  • the above load balancing system may further include:
  • the switch is further configured to perform a modulo calculation on each path entry in the ECMP routing entries and determine, from the updated link state information, whether a communication link to a front-end scheduling server has been added or removed. If a communication link to a front-end scheduling server is found to be disconnected, the modulo results of the path entries corresponding to the other communication links in the ECMP routing entries are kept unchanged, and the traffic of the disconnected communication link is re-hashed and distributed to the other communication links; if the number of path entries corresponding to communication links in the ECMP routing entries has increased, part of the traffic on the active communication links is allocated to the new communication link.
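The failover rule described above can be sketched as follows: slots mapped to surviving next hops keep their modulo result, and only the failed link's slots are redistributed. A hypothetical Python model, with addresses and slot count invented for illustration:

```python
# Hypothetical sketch of the failover rule: when one communication link
# is disconnected, slots pointing at surviving next hops keep their
# mapping, and only the failed link's slots are re-hashed onto survivors.
def rebuild_on_failure(slot_table, failed_nh, survivors):
    new_table = list(slot_table)
    for i, nh in enumerate(slot_table):
        if nh == failed_nh:
            # Redistribute only the broken link's traffic.
            new_table[i] = survivors[i % len(survivors)]
    return new_table

old = ["1.1.1.1", "1.1.1.2", "1.1.1.3", "1.1.1.4"] * 16  # 64 slots
new = rebuild_on_failure(old, "1.1.1.3", ["1.1.1.1", "1.1.1.2", "1.1.1.4"])

# All 48 slots on healthy links are untouched, so their connections
# survive; only the 16 failed slots move.
assert sum(o != n for o, n in zip(old, new)) == 16
```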
  • the above load balancing system may further include:
  • in the preset equalization condition, the hash values set for the ECMP routing entries include the hash value of any network four-tuple; each slot in the ECMP routing entries uniquely corresponds to one next hop address.
  • the above load balancing system may further include:
  • the N front-end scheduling servers include two or more front-end scheduling servers configured in a multi-master front-end scheduling server architecture.
  • the load balancing method based on the above load balancing system in the present invention includes:
  • the switch receives the request data packet sent by the client and determines, according to the preset equalization condition, the same next hop address allocated to the request data packet;
  • the preset equalization condition includes: the hash value is the same, the total number of the next hops of each path entry in the ECMP routing entry is the same, and the exit information of the next hop stored by the same offset is the same.
  • before receiving the request data packet sent by the client, the method further includes:
  • the above method may further include:
  • the present invention also includes another load balancing method based on the above load balancing system, including:
  • the front-end scheduling server receives the request data packet that the switch forwards from the client, where the switch has determined, according to a preset equalization condition, the same next hop address allocated to the request data packet, selected from the equal-cost multipath (ECMP) routing entries the path entry corresponding to the determined next hop address, and sent the request data packet over the communication link corresponding to the selected path entry to the corresponding front-end scheduling server;
  • the preset equalization condition includes: the hash value is the same, the total number of the next hops of each path entry in the ECMP routing entry is the same, and the exit information of the next hop stored by the same offset is the same.
  • the present invention also includes another load balancing method based on the above load balancing system, including:
  • the application server responds to the request data packet sent by the front end scheduling server from the user end, and returns a corresponding response result to the user end;
  • the request data packet is obtained by the front-end scheduling server after the switch receives it from the client, determines, according to a preset equalization condition, the same next hop address allocated to it, selects from the equal-cost multipath (ECMP) routing entries the path entry corresponding to the determined next hop address, and sends the request data packet over the communication link corresponding to the selected path entry;
  • the preset equalization condition includes: the hash value is the same, the total number of the next hops of each path entry in the ECMP routing entry is the same, and the exit information of the next hop stored by the same offset is the same.
  • the embodiment of the present invention adopts a switch-based load balancing architecture: the switch determines, according to the preset equalization condition, the same next hop address allocated to a request data packet from the client, sends the received request data packet to a front-end scheduling server according to that next hop address, and the packet is then forwarded to an application server, thereby implementing load balancing. The numbers of front-end scheduling servers and application servers are both greater than one.
  • FIG. 1 is a schematic diagram of a conventional load balancing system using the LVS cluster service;
  • FIG. 2 is a schematic structural diagram of a switch-based load balancing system according to Embodiment 1 of the present invention
  • FIG. 3 is a specific working flowchart of a load balancing system according to Embodiment 1 of the present invention.
  • FIG. 5 is a schematic flowchart of a load balancing method according to Embodiment 2 of the present invention.
  • FIG. 6 is a schematic flowchart of a load balancing method according to Embodiment 3 of the present invention.
  • Embodiment 1:
  • a first embodiment of the present invention provides a switch-based load balancing system.
  • FIG. 2 is a schematic structural diagram of the switch-based load balancing system according to Embodiment 1 of the present invention.
  • the load balancing system includes: a switch 21, N front-end scheduling servers 22, and M application servers 23, where N and M are both integers greater than one.
  • the switch 21 is configured to receive the request data packet sent by the client, determine, according to the preset equalization condition, the same next hop address allocated to the request data packet, select from the equal-cost multipath (ECMP) routing entries the path entry corresponding to the determined next hop address, and send the request data packet over the communication link corresponding to the selected path entry to the corresponding front-end scheduling server 22, so that the front-end scheduling server 22 sends the received request data packet to the corresponding application server, whose response result is returned to the client via the application server 23. The preset equalization condition is: the hash values calculated by the chip are the same, the total number of next hops of each path entry in the ECMP routing entries is the same, and the next-hop egress information stored at the same offset is the same.
  • the front-end scheduling server 22 can be configured to send the received request data packet to the corresponding application server 23;
  • the application server 23 is configured to respond to the request packet sent by the front-end scheduling server 22, and return the corresponding response result to the client.
  • because the switch receives the request data packets sent by the client, this operation also helps resist SYN flood attacks.
  • ECMP (Equal-Cost Multi-Path) is an IP routing technique whose path selection is computed at the network layer (layer 3 of the OSI seven-layer model).
  • the significance of equal-cost multipath is as follows: in a network environment where multiple different links reach the same destination address, traditional routing technology sends packets destined for that address over only one of the links, while the other links remain in a backup or inactive state, and switching between paths in a dynamic routing environment takes a certain amount of time. The ECMP protocol, by contrast, can use multiple links simultaneously in such an environment, which not only increases transmission bandwidth but also lets the data of a failed link be taken over without delay or packet loss.
  • the biggest feature of ECMP is that it achieves multipath load balancing and link backup under equal-cost conditions. In practice, the number of configured ECMP members may be determined by the number of front-end scheduling servers 22 in the cluster.
  • the load balancing system of the embodiment of the present invention is not limited to the architecture of two active/standby front-end scheduling servers: it can scale out to two or more multi-master front-end scheduling servers, avoiding both the complex coordination mechanism used by active/standby front-end scheduling servers during failover and the long-waiting role of the standby server, thereby improving resource utilization. When a malicious attack occurs, the multi-master front-end scheduling servers can work simultaneously to absorb the attack traffic, which enhances disaster recovery capability. In addition, since the system receives client access requests at the switch, the switch also helps resist large-scale SYN flood attacks.
  • the User layer is where ordinary users access the system: thousands of users distributed across different geographical locations generate large numbers of application access request packets, which reach the switch-based server cluster via the Internet.
  • the system architecture of the switch-based load balancing system in the embodiment of the present invention is divided into three layers: a switch layer, a front-end server layer, and a Realserver layer (application servers).
  • the switch layer consists of the network devices of the Internet data center. It performs L4 load balancing, applies layer-4 consistent hashing, and distributes traffic to the child nodes of the front-end server layer.
  • this requires the switch's ECMP (Equal-Cost MultiPath) function, which performs a hash calculation on the five-tuple of the data flow and, combined with the preset equalization condition, determines the next hop address, thereby balancing traffic across the links.
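The five-tuple hashing step might look like the following sketch. SHA-256 and the link names are stand-ins for the switch chip's hash function and the real front-end server egresses; none of them come from the patent.

```python
import hashlib
from collections import Counter

# Illustrative five-tuple hash for ECMP path selection.
def quintuple_hash(src_ip, dst_ip, src_port, dst_port, proto):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big")

links = ["fe-1", "fe-2", "fe-3", "fe-4"]  # hypothetical front-end servers

# 10,000 flows differing only in source port: the hash spreads them
# roughly evenly over the four links.
counts = Counter(
    links[quintuple_hash("10.0.0.1", "203.0.113.9", p, 443, "tcp") % len(links)]
    for p in range(10000, 20000)
)
assert all(2000 < c < 3000 for c in counts.values())
```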
  • the front-end server layer runs an equal-cost routing protocol, configures the same loopback address for each application server, configures a static route to the address, and points the next hop to the physical port address of the different application server.
  • the front-end server layer includes at least two front-end scheduling servers with the same service address, while each front-end server's IP address is unique. Multiple individual scheduling servers are joined through ECMP, forming a cluster mode with a new architecture that facilitates horizontal expansion of the server cluster and solves the active/standby front-end scheduling architecture's problems of overload, wasted resources, and poor scalability.
  • the RealServer layer, that is, the application server layer, consists of ordinary web servers, log servers, application servers, and so on, which may be collectively referred to as the application server cluster (RS).
  • the switch may be further configured to: before receiving the request data packet sent by the client, acquire link state information for the communication link to each front-end scheduling server, update the link information corresponding to each path entry in the ECMP routing entries according to that link state information, and broadcast the updated link state information to each front-end scheduling server;
  • the front-end scheduling server can also be configured to send and receive link state information.
  • a routing protocol runs on the switch to receive link state information from, and broadcast it to, the front-end scheduling servers; the switch is designated as able to both receive and broadcast link state information.
  • the switch can enable one or more virtual interfaces to interact with the front-end scheduling server through the virtual interface.
  • a front-end scheduling server only exchanges and updates link state information with its designated switch. After the designated switch updates the link state information, it publishes the update to the other front-end scheduling servers, which receive it so that their routing information stays consistent with the switch's.
  • the request packet of the client arrives at the computer room switch of the data center via the Internet;
  • the switch checks the destination service address of the data packet, looks up the ECMP routing entries, determines, according to the preset equalization condition, the same next hop address allocated to the request packet, selects from the equal-cost ECMP routing entries the path entry corresponding to the determined next hop address, and obtains the next-hop egress corresponding to that path entry. The next-hop egress is a network port of a front-end scheduling server, capable of both sending and receiving; the switch then sends the request packet to the front-end scheduling server on the selected equal-cost path. If no valid path is found in the ECMP routing entries, the request packet is discarded.
  • an example illustrates the calculation of the same next hop address: the process of determining the same next hop address allocated to a request packet according to the preset equalization condition may be as shown in FIG. 4.
  • the routing entry for communication link L3 contains two prefixes, 10.10.10.0/24 and 20.20.20.0/24. Both hash to slot 3, and slot 3 is uniquely assigned the next hop address 1.1.1.1. Therefore, after the hash offset calculation, both addresses share the same next hop: NH 1.1.1.1 is the next-hop egress address.
  • after receiving the request data packet, the current scheduling server sends it to the back-end application server cluster (RS) according to a preset equalization manner.
  • the preset equalization manner in fact depends on how the ECMP slots are defined. For example, if 64 slots are divided equally into 8 groups, each group receives traffic with equal probability; if 64 slots are divided into 7 groups, at least one group's traffic share must differ from the others. The embodiments of the invention do not elaborate on this.
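The divisibility point above is easy to check numerically. A small Python helper (illustrative only) computes each group's slot share:

```python
# Illustrative check of the slot-grouping arithmetic: 64 slots split into
# 8 groups is perfectly even, while 7 groups cannot be.
def group_shares(slots, groups):
    """Number of slots (i.e. share of traffic) each group receives."""
    base, extra = divmod(slots, groups)
    return [base + (1 if g < extra else 0) for g in range(groups)]

assert group_shares(64, 8) == [8] * 8      # every group gets the same share
uneven = group_shares(64, 7)               # 64 = 7 * 9 + 1
assert max(uneven) != min(uneven)          # at least one group differs
```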
  • the front-end scheduling server layer can be divided into L4/L7 egresses, identified by IP/port, connecting to the Realserver layer nodes.
  • at L4, the two-tuple (source IP and destination IP) participates in the hash calculation; at L7, the source port and destination port are added to the hash calculation before the corresponding egress is selected. The embodiments of the present invention do not elaborate on this.
  • the application server cluster responds to the request and sends the response packet to the requesting client.
  • the load balancing of the switch outperforms that of LVS, and the various complicated active/standby coordination mechanisms are avoided during failover, which improves resource utilization and scalability and also solves the overload problem of the traditional active/standby LVS service cluster.
  • in addition, since the system receives client access requests at the switch, the switch also helps resist large-scale SYN flood attacks.
  • the switch may be configured to perform a modulo calculation on each path entry in the ECMP routing entries and determine, from the updated link state information, whether a communication link to a front-end scheduling server has been added or removed. If a communication link to a front-end scheduling server is disconnected, the modulo results of the path entries corresponding to the other communication links in the ECMP routing entries are kept unchanged, and the traffic of the disconnected communication link is re-hashed and allocated to the other communication links; if the number of path entries corresponding to communication links in the ECMP routing entries increases, part of the traffic on the active communication links is allocated to the newly added communication link.
  • the load balancing system of the embodiment of the invention thus supports adding and removing front-end scheduling servers, and when such a transition occurs, load balancing on the other communication links continues without interruption.
  • the switch takes responsibility for consistent hashing, which gives strong flexibility, enhances the packet forwarding capability of the whole architecture, and offers greater advantages in horizontal scaling.
  • the switch and the TOR (top-of-rack) device are interconnected, and data flows are distributed through ECMP on the TOR device to the member machines of the load balancing cluster (that is, to the front-end scheduling servers in this embodiment).
  • with an existing dynamic routing protocol, ECMP routing entries are generated between the TOR and the load balancing cluster. After a link in the ECMP routing entries fails, the dynamic routing protocol reconverges and the traffic from the TOR device to the load balancing cluster is rebalanced, which disrupts the session state previously maintained on the load balancing cluster's member machines; the entire cluster must rebuild its sessions, so some sessions are interrupted.
  • the load balancing system of the present invention can perform a consistent hash at the switch layer, and solves the problem that the session will be completely disrupted after a server is down.
  • the consistent hash is based on the number of existing ECMP entries: when one communication link is disconnected, the other communication links are left unchanged, and the traffic of the disconnected link is re-hashed onto the other links, so the TCP connections on the other links are preserved. In other words, unlike normal ECMP, whose design is suitable only for UDP links, the consistent hash design in the load balancing system of the embodiment of the present invention allows the architecture to be applied to TCP links as well.
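The difference from plain ECMP rehashing can be quantified with a toy model. With a plain modulo over the live-path count, shrinking four paths to three remaps most flows; with the consistent scheme, only the failed slot's flows move. The figures below are invented for illustration, not taken from the patent:

```python
# Toy comparison: when one of four next hops fails, plain modulo
# rehashing over the live-path count remaps most flows, while the
# consistent scheme moves only the flows whose slot pointed at the
# dead link.
flows = list(range(1, 10001))  # stand-in flow hash values

# Plain ECMP: next hop = hash % path_count, recomputed after the failure.
moved_plain = sum(h % 4 != h % 3 for h in flows)
assert moved_plain > len(flows) * 0.7  # ~75% of TCP sessions would break

# Consistent scheme: only flows hashed to the failed slot (index 3) move.
moved_consistent = sum(h % 4 == 3 for h in flows)
assert moved_consistent == len(flows) // 4  # exactly the failed link's share
```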
  • the embodiment of the present invention adopts a switch-based load balancing architecture: the switch determines, according to the preset equalization condition, the same next hop address allocated to a request data packet from the client, sends the received request data packet to a front-end scheduling server according to that next hop address, and the packet is then forwarded to an application server, thereby implementing load balancing. The numbers of front-end scheduling servers and application servers are both greater than one.
  • in the load balancing system of the embodiment of the present invention, since the switch forwards purely in hardware and its ports run at line rate, the load balancing of the switch outperforms that of LVS; the various complicated active/standby coordination mechanisms are avoided during failover, which improves resource utilization and scalability and also solves the overload problem of the traditional active/standby LVS service cluster.
  • the switch takes responsibility for consistent hashing, which gives strong flexibility, enhances the packet forwarding capability of the whole architecture, and offers greater advantages in horizontal scaling. In addition, since the system receives client access requests at the switch, the switch also helps resist large-scale SYN flood attacks.
  • the second embodiment of the present invention provides a load balancing method based on a load balancing system.
  • the execution entity is a switch.
  • the schematic diagram of the process is shown in Figure 5. The method includes:
  • Step 501: The switch receives the request data packet sent by the client and determines, according to the preset equalization condition, the same next hop address allocated to the request data packet.
  • the preset equalization condition is: the hash values calculated by the chip are the same, the total number of next hops of each path entry in the ECMP routing entries is the same, and the next-hop egress information stored at the same offset is the same.
  • because the switch receives the request data packets sent by the client, this step also helps resist SYN flood attacks.
  • Step 502 Select, according to the determined next hop address, a path entry corresponding to the next hop address from the ECMP routing entry of the equal-cost route.
  • Step 503 Send the request data packet to the corresponding front-end scheduling server according to the communication link corresponding to the selected path entry, so that the front-end scheduling server sends the received request data packet to the corresponding application server, and the application server The response result of the request packet is returned to the client.
  • the method may further include steps A1-A2 before receiving the request packet sent by the client:
  • Step A1 Acquire link state information of a communication link with each front-end scheduling server.
  • Step A2 Update the link information corresponding to each path entry in the ECMP routing entry according to the link state information, and broadcast the updated link state information to each front-end scheduling server.
  • the method may also include steps B1-B2:
  • Step B1 Perform modulo calculation on each path entry in the ECMP routing entry.
  • Step B2: Determine, according to the updated link state information, whether a communication link to a front-end scheduling server has been added or removed. If the communication link to any front-end scheduling server is disconnected, keep the path entries corresponding to the other communication links in the ECMP routing entries unchanged, and re-hash the traffic of the disconnected communication link onto the other communication links; if the number of path entries corresponding to communication links in the ECMP routing entries increases, allocate part of the traffic on the active communication links to the newly added communication link.
  • the embodiment of the present invention adopts a switch-based load balancing architecture, and determines the same next hop address allocated by the request packet from the UE according to the preset equalization condition, and the request packet received by the switch is determined according to the same next hop.
  • the address is sent to the front-end scheduling server and then forwarded to the application server, thereby implementing load balancing.
  • the number of front-end scheduling servers and the number of application servers are both greater than one.
  • in the load balancing system of the embodiment of the present invention, since switch forwarding is pure hardware forwarding and all ports forward at line speed, the load balancing performance of the switch is better than that of LVS. The various complicated active/standby coordination mechanisms otherwise needed during failover are avoided, which improves resource utilization and scalability and also solves the problem of excessive load on service clusters in the traditional active/standby LVS architecture.
  • the switch is responsible for the consistent hashing, which provides strong flexibility, enhances the packet forwarding capability of the entire architecture, and offers greater advantages in traversal.
  • since the system receives access requests from the client at the switch, the switch is also able to resist the effects of large-scale SYN Flood attacks.
  • the third embodiment of the present invention provides another load balancing method based on the load balancing system.
  • the execution entity is the front-end scheduling server.
  • the process diagram is shown in Figure 6. The method includes:
  • Step 601 The front-end scheduling server receives a request packet sent by the switch from the user end, where the switch has determined, according to the preset equalization condition, the same next-hop address to be allocated to the request packet, selected the path entry corresponding to that next-hop address from the equal-cost multi-path (ECMP) routing entry according to the determined next-hop address, and sent the request packet to the corresponding front-end scheduling server over the communication link corresponding to the selected path entry.
  • the preset equalization condition is: the hash value calculated by the chip is the same, the total number of next hops in each path entry of the ECMP routing entry is the same, and the next-hop exit information stored at the same offset is the same.
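Under stated assumptions (a CRC32 stand-in for the chip's vendor-specific hash, illustrative port names), the condition above can be read as: any two devices that compute the same hash over a flow, hold the same total number of next hops, and store the same exit information at each offset will necessarily pick the same next hop for that flow. A minimal sketch:

```python
# Minimal sketch of the preset equalization condition. The real hash is
# computed by the switching chip; CRC32 over the 5-tuple is a stand-in.
import zlib

def chip_hash(src_ip, dst_ip, proto, sport, dport):
    key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    return zlib.crc32(key)  # same inputs -> same hash value

def next_hop(ecmp_entry, pkt):
    # offset = hash mod (total number of next hops); the exit information
    # stored at that offset decides where the packet leaves the device
    offset = chip_hash(*pkt) % len(ecmp_entry)
    return ecmp_entry[offset]
```

If `ecmp_a` and `ecmp_b` are identical lists of exit information on two devices, `next_hop(ecmp_a, pkt) == next_hop(ecmp_b, pkt)` for every packet, which is why a given flow is always steered to the same front-end scheduling server.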
  • the switch receives the request packet sent by the user end (UE); this step is also used to resist SYN Flood attacks.
  • Step 602 Send the request packet to the corresponding application server, so that the application server returns the response result of the request packet to the user end.
  • the method may further include:
  • the front-end scheduling server exchanges link state information with the switch.
  • the embodiment of the present invention adopts a switch-based load balancing architecture: the switch determines, according to the preset equalization condition, the same next-hop address to be allocated to a request packet from the user end, sends the received request packet to a front-end scheduling server according to the determined next-hop address, and the packet is then forwarded to an application server, thereby implementing load balancing.
  • the number of front-end scheduling servers and the number of application servers are both greater than one.
  • in the load balancing system of the embodiment of the present invention, since switch forwarding is pure hardware forwarding and all ports forward at line speed, the load balancing performance of the switch is better than that of LVS. The various complicated active/standby coordination mechanisms otherwise needed during failover are avoided, which improves resource utilization and scalability and also solves the problem of excessive load on service clusters in the traditional active/standby LVS architecture.
  • the switch is responsible for the consistent hashing, which provides strong flexibility, enhances the packet forwarding capability of the entire architecture, and offers greater advantages in traversal.
  • since the system receives access requests from the client at the switch, the switch is also able to resist the effects of large-scale SYN Flood attacks.
  • the fourth embodiment of the present invention provides another load balancing method based on the load balancing system.
  • the execution subject is an application server.
  • the method includes:
  • the application server responds to the request packet from the client that was forwarded by the front-end scheduling server, and returns the corresponding response result to the client.
  • the request packet is one for which the switch, upon receiving it from the client, determined the same next-hop address according to a preset equalization condition, selected the path entry corresponding to that next-hop address from the equal-cost multi-path (ECMP) routing entry according to the determined next-hop address, and then sent the request packet to the corresponding front-end scheduling server over the communication link corresponding to the selected path entry.
  • Preset equalization condition: the hash value calculated by the chip is the same, the total number of next hops in each path entry of the ECMP routing entry is the same, and the next-hop exit information stored at the same offset is the same.
  • the switch receives the request packet sent by the user end (UE); this step is also used to resist SYN Flood attacks.
  • the embodiment of the present invention adopts a switch-based load balancing architecture: the switch determines, according to the preset equalization condition, the same next-hop address to be allocated to a request packet from the user end, sends the received request packet to a front-end scheduling server according to the determined next-hop address, and the packet is then forwarded to an application server, thereby implementing load balancing.
  • the number of front-end scheduling servers and the number of application servers are both greater than one.
  • in the load balancing system of the embodiment of the present invention, since switch forwarding is pure hardware forwarding and all ports forward at line speed, the load balancing performance of the switch is better than that of LVS. The various complicated active/standby coordination mechanisms otherwise needed during failover are avoided, which improves resource utilization and scalability and also solves the problem of excessive load on service clusters in the traditional active/standby LVS architecture.
  • the switch is responsible for the consistent hashing, which provides strong flexibility, enhances the packet forwarding capability of the entire architecture, and offers greater advantages in traversal.
  • since the system receives access requests from the client at the switch, the switch is also able to withstand large-scale SYN Flood attacks.
  • computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information, such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disc (DVD) or other optical disc storage, magnetic cartridge, magnetic tape, magnetic disk storage or other magnetic storage device, or any other medium that can be used to store the desired information and that can be accessed by the computer.
  • communication media typically embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and include any information delivery media.
  • the embodiment of the invention effectively improves load balancing performance, avoids the complicated active/standby coordination mechanism during failover, improves resource utilization and scalability, and also solves the problem of excessive load on service clusters in the traditional active/standby LVS architecture.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Computer And Data Communications (AREA)

Abstract

Embodiments of the present invention provide a switch-based load balancing system and method. In the system, a switch-based load balancing architecture is used: the same next-hop address allocated to request data packets is determined according to preset equalization conditions, and the request data packets received by a switch are sent to front-end scheduling servers according to the determined same next-hop address and then forwarded to application servers, so that load balancing is implemented, the number of front-end scheduling servers and the number of application servers both being greater than 1. In the present invention, switch forwarding is pure hardware forwarding and the ports forward at line speed; therefore, switch-based load balancing performs better than LVS, and the complicated active/standby coordination mechanism during failover is avoided, so that resource utilization is improved, strong scalability is provided, and the problem of excessive load on a traditional active/standby LVS architecture service cluster is solved.
PCT/CN2017/108046 2016-10-27 2017-10-27 Switch-based load balancing system and method WO2018077238A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610948570.2A CN107995123B (zh) 2016-10-27 2016-10-27 Switch-based load balancing system and method
CN201610948570.2 2016-10-27

Publications (1)

Publication Number Publication Date
WO2018077238A1 true WO2018077238A1 (fr) 2018-05-03

Family

ID=62024827

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/108046 WO2018077238A1 (fr) Switch-based load balancing system and method

Country Status (2)

Country Link
CN (2) CN111600806B (fr)
WO (1) WO2018077238A1 (fr)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110012065A (zh) * 2019-02-25 2019-07-12 贵州格物数据有限公司 Resource scheduling platform and method based on virtualization technology
CN110309031A (zh) * 2019-07-04 2019-10-08 深圳市瑞驰信息技术有限公司 Load-balancing micro-computing cluster architecture
CN110661904A (zh) * 2019-10-25 2020-01-07 浪潮云信息技术有限公司 Method for implementing horizontal scaling of a source network address translation gateway
CN111756830A (zh) * 2020-06-22 2020-10-09 浪潮云信息技术股份公司 Method for implementing internal-network load balancing in a public cloud network
CN112653620A (zh) * 2020-12-21 2021-04-13 杭州迪普科技股份有限公司 Route processing method, apparatus, device, and computer-readable storage medium
CN112817752A (zh) * 2021-01-21 2021-05-18 西安交通大学 Dynamic load balancing method for distributed databases
CN113377510A (zh) * 2021-06-08 2021-09-10 武汉理工大学 Consistent-hashing-based cache packet scheduling optimization algorithm for serverless computing environments
CN113542143A (zh) * 2020-04-14 2021-10-22 中国移动通信集团浙江有限公司 CDN node traffic scheduling method, apparatus, computing device, and computer storage medium
CN113691608A (zh) * 2021-08-20 2021-11-23 京东科技信息技术有限公司 Traffic distribution method, apparatus, electronic device, and medium
CN113709054A (zh) * 2021-07-16 2021-11-26 济南浪潮数据技术有限公司 Keepalived-based LVS system deployment adjustment method, apparatus, and system
CN114268630A (zh) * 2021-12-14 2022-04-01 浪潮思科网络科技有限公司 Method, apparatus, and device for implementing random load-balanced access based on static ARP entries
US11425030B2 (en) 2020-10-08 2022-08-23 Cisco Technology, Inc. Equal cost multi-path (ECMP) failover within an automated system (AS)
CN116155910A (zh) * 2023-03-29 2023-05-23 新华三技术有限公司 Device management method and apparatus

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110099115B (zh) * 2019-04-30 2022-02-22 湖南麒麟信安科技股份有限公司 Load balancing method and system with transparent scheduling and forwarding
CN110225137B (zh) * 2019-06-24 2022-11-11 北京达佳互联信息技术有限公司 Service request processing method, system, server, and storage medium
CN110971679B (zh) * 2019-11-21 2023-04-07 厦门亿联网络技术股份有限公司 Conference service scheduling method and apparatus
CN111464362B (zh) * 2020-04-08 2023-04-07 上海晨驭信息科技有限公司 System for automatic switchover between one primary and multiple standby servers
CN111988221B (zh) * 2020-08-31 2022-09-13 网易(杭州)网络有限公司 Data transmission method, data transmission apparatus, storage medium, and electronic device
CN112104513B (zh) * 2020-11-02 2021-02-12 武汉中科通达高新技术股份有限公司 Visual software load method, apparatus, device, and storage medium
CN112751944A (zh) * 2021-02-18 2021-05-04 南京宏锐祺程信息科技有限公司 Stream data acceleration method, server, and load balancing device
CN113452614B (zh) * 2021-06-25 2022-06-21 新华三信息安全技术有限公司 Packet processing method and apparatus
CN114079636A (zh) * 2021-10-25 2022-02-22 深信服科技股份有限公司 Traffic processing method, switch, soft load device, and storage medium
CN114465984B (zh) * 2022-04-12 2022-08-23 浙江中控研究院有限公司 Transmission-path-based address allocation method, system, device, and computer-readable storage medium
CN118509376B (zh) * 2024-07-19 2024-09-27 天翼云科技有限公司 Multi-traffic-path load balancing method and apparatus for a service provider

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101572667A (zh) * 2009-05-22 2009-11-04 中兴通讯股份有限公司 Method and apparatus for implementing IP route equal-cost multi-path
CN104144120A (zh) * 2013-05-07 2014-11-12 杭州华三通信技术有限公司 Forwarding information configuration method and apparatus
CN104301417A (zh) * 2014-10-22 2015-01-21 网宿科技股份有限公司 Load balancing method and apparatus
CN105515979A (zh) * 2015-12-29 2016-04-20 新浪网技术(中国)有限公司 Open Shortest Path First (OSPF) cross-network balanced forwarding method and system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101043428B (zh) * 2006-05-30 2012-05-02 华为技术有限公司 Route forwarding method and system
CN103166870B (zh) * 2011-12-13 2017-02-08 百度在线网络技术(北京)有限公司 Load balancing cluster system and method for providing services by using it
US9049137B1 (en) * 2012-08-06 2015-06-02 Google Inc. Hash based ECMP load balancing with non-power-of-2 port group sizes
CN103078804B (zh) * 2012-12-28 2015-07-22 福建星网锐捷网络有限公司 Equal-cost multi-path table processing method, apparatus, and network device
CN104796347A (zh) * 2014-01-20 2015-07-22 中兴通讯股份有限公司 Load balancing method, apparatus, and system
US9246812B2 (en) * 2014-04-17 2016-01-26 Alcatel Lucent Method and apparatus for selecting a next HOP
CN104301246A (zh) * 2014-10-27 2015-01-21 盛科网络(苏州)有限公司 SDN-based large-flow load-balanced forwarding method and apparatus
WO2016106522A1 (fr) * 2014-12-29 2016-07-07 Nokia Technologies Oy Method and apparatus for server load balancing
CN104539552A (zh) * 2015-01-12 2015-04-22 盛科网络(苏州)有限公司 Network-chip-based dynamic ECMP implementation method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101572667A (zh) * 2009-05-22 2009-11-04 中兴通讯股份有限公司 Method and apparatus for implementing IP route equal-cost multi-path
CN104144120A (zh) * 2013-05-07 2014-11-12 杭州华三通信技术有限公司 Forwarding information configuration method and apparatus
CN104301417A (zh) * 2014-10-22 2015-01-21 网宿科技股份有限公司 Load balancing method and apparatus
CN105515979A (zh) * 2015-12-29 2016-04-20 新浪网技术(中国)有限公司 Open Shortest Path First (OSPF) cross-network balanced forwarding method and system

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110012065A (zh) * 2019-02-25 2019-07-12 贵州格物数据有限公司 Resource scheduling platform and method based on virtualization technology
CN110309031A (zh) * 2019-07-04 2019-10-08 深圳市瑞驰信息技术有限公司 Load-balancing micro-computing cluster architecture
CN110309031B (zh) * 2019-07-04 2023-07-28 深圳市臂云科技有限公司 Load-balancing micro-computing cluster architecture
CN110661904A (zh) * 2019-10-25 2020-01-07 浪潮云信息技术有限公司 Method for implementing horizontal scaling of a source network address translation gateway
CN113542143B (zh) * 2020-04-14 2023-12-26 中国移动通信集团浙江有限公司 CDN node traffic scheduling method, apparatus, computing device, and computer storage medium
CN113542143A (zh) * 2020-04-14 2021-10-22 中国移动通信集团浙江有限公司 CDN node traffic scheduling method, apparatus, computing device, and computer storage medium
CN111756830A (zh) * 2020-06-22 2020-10-09 浪潮云信息技术股份公司 Method for implementing internal-network load balancing in a public cloud network
US11425030B2 2020-10-08 2022-08-23 Cisco Technology, Inc. Equal cost multi-path (ECMP) failover within an automated system (AS)
CN112653620A (zh) * 2020-12-21 2021-04-13 杭州迪普科技股份有限公司 Route processing method, apparatus, device, and computer-readable storage medium
CN112653620B (zh) * 2020-12-21 2023-03-24 杭州迪普科技股份有限公司 Route processing method, apparatus, device, and computer-readable storage medium
CN112817752A (zh) * 2021-01-21 2021-05-18 西安交通大学 Dynamic load balancing method for distributed databases
CN112817752B (zh) * 2021-01-21 2023-12-19 西安交通大学 Dynamic load balancing method for distributed databases
CN113377510B (zh) * 2021-06-08 2023-10-24 武汉理工大学 Consistent-hashing-based cache packet scheduling optimization algorithm for serverless computing environments
CN113377510A (zh) * 2021-06-08 2021-09-10 武汉理工大学 Consistent-hashing-based cache packet scheduling optimization algorithm for serverless computing environments
CN113709054A (zh) * 2021-07-16 2021-11-26 济南浪潮数据技术有限公司 Keepalived-based LVS system deployment adjustment method, apparatus, and system
CN113691608A (zh) * 2021-08-20 2021-11-23 京东科技信息技术有限公司 Traffic distribution method, apparatus, electronic device, and medium
CN113691608B (zh) * 2021-08-20 2024-02-06 京东科技信息技术有限公司 Traffic distribution method, apparatus, electronic device, and medium
CN114268630A (zh) * 2021-12-14 2022-04-01 浪潮思科网络科技有限公司 Method, apparatus, and device for implementing random load-balanced access based on static ARP entries
CN114268630B (zh) * 2021-12-14 2024-04-12 浪潮思科网络科技有限公司 Method, apparatus, and device for implementing random load-balanced access based on static ARP entries
CN116155910A (zh) * 2023-03-29 2023-05-23 新华三技术有限公司 Device management method and apparatus
CN116155910B (zh) * 2023-03-29 2023-07-21 新华三技术有限公司 Device management method and apparatus

Also Published As

Publication number Publication date
CN111600806B (zh) 2023-04-18
CN107995123A (zh) 2018-05-04
CN107995123B (zh) 2020-05-01
CN111600806A (zh) 2020-08-28

Similar Documents

Publication Publication Date Title
WO2018077238A1 (fr) Switch-based load balancing system and method
JP7417825B2 (ja) Slice-based routing
EP2845372B1 (fr) Two-level packet distribution with stateless first-level packet distribution to a group of servers and stateful second-level packet distribution to a server in the group
US11381883B2 (en) Dynamic designated forwarder election per multicast stream for EVPN all-active homing
US8676980B2 (en) Distributed load balancer in a virtual machine environment
JP5661929B2 (ja) System and method for multi-chassis link aggregation
JP6510115B2 (ja) Method, apparatus, and network system for implementing load balancing
US9553809B2 (en) Asymmetric packet flow in a distributed load balancer
KR100680888B1 (ko) Virtual multicast routing for a cluster with state synchronization
US10237179B2 (en) Systems and methods of inter data center out-bound traffic management
US7849127B2 (en) Method and apparatus for a distributed control plane
WO2012116614A1 (fr) Method, network node, and system for distributing network traffic volume
US11546267B2 (en) Method for determining designated forwarder (DF) of multicast flow, device, and system
WO2022253087A1 (fr) Data transmission method, node, network manager, and system
WO2012065440A1 (fr) Implementation method and apparatus for determining the priority of devices in a virtual router redundancy protocol backup group
Azgin et al. On-demand mobility support with anchor chains in Information Centric Networks
WO2023274087A1 (fr) Message forwarding method, apparatus, and system
WO2018040916A1 (fr) Message forwarding method and device
CN108737263B Data center system and data flow processing method
Matsuo et al. TE-Cast: Supporting general broadcast/multicast communications in virtual networks
US20220345326A1 (en) Selecting a rendezvous point in an ip multicast-capable network
Han et al. A Novel Multipath Load Balancing Algorithm in Fat-Tree Data Center
Zhang et al. Scalability and Bandwidth Optimization for Data Center Networks
Teixeira et al. UNIT: Multicast using unicast trees
Κωνσταντινίδης Experimental study of data center network load balancing mechanisms

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17864651

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17864651

Country of ref document: EP

Kind code of ref document: A1