CN102255932B - Load-balancing method and load equalizer - Google Patents

Load-balancing method and load equalizer

Info

Publication number
CN102255932B
CN102255932B (application number CN201010184118.6A)
Authority
CN
China
Prior art keywords
port
packet
going out
data flow
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201010184118.6A
Other languages
Chinese (zh)
Other versions
CN102255932A (en)
Inventor
李闻
吴佳明
陈建
田燕
孙垚光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201010184118.6A
Publication of CN102255932A
Application granted
Publication of CN102255932B
Legal status: Active
Anticipated expiration


Abstract

The invention discloses a load-balancing method and a load equalizer. The load-balancing method of the present invention uses a session table that stores the client IP/port and the virtual IP/port, and adds to the session table an item that stores a backend IP/port. For an incoming data flow, if no entry indexed by the incoming-flow source IP/port and incoming-flow destination IP/port can be found in the session table, a real server, a backend IP and a backend port are selected, and an entry containing the virtual IP/port, client IP/port, real server IP/port and backend IP/port is created in the session table. Then, according to the corresponding entry in the session table, the destination IP/port of the incoming packet is modified to the real server IP/port and the source IP/port of the incoming packet is modified to the backend IP/port, i.e. two NAT translations are carried out; two NAT translations are likewise carried out for the outgoing data flow. The technical scheme of the present invention therefore achieves cross-network-segment interconnection.

Description

Load-balancing method and load equalizer
Technical field
The present invention relates generally to computer networks, and more particularly to a load-balancing method and a load equalizer.
Background art
With the development of computers, networks and related technologies, networks have reached every corner of people's lives. At the core of today's networks, as business volume grows, access volume and data traffic also grow rapidly, and the demands on processing power and computing capacity increase correspondingly, so that a single server cannot bear the load at all.
One way to solve this problem is to discard the existing equipment and carry out large-scale hardware upgrades. On the one hand this wastes existing resources; on the other hand, the same difficulty recurs the next time traffic rises, because even equipment with outstanding performance cannot satisfy limitlessly growing business demand, so another round of expensive hardware upgrades is needed whenever traffic rises again. The approach is therefore very costly, and the cost keeps increasing as traffic grows. Another method is to let multiple servers share the traffic jointly. The physical servers at the backend can be grouped, each group supporting a certain application, and a virtual IP/port (v_ip:v_port) is configured for the group to provide service externally; the application server address stored in the domain name server (DNS) is this virtual IP/port rather than a real server address. When a client wants to access the server, it sends packets with v_ip:v_port as the destination IP/port; according to the destination IP/port of the packet, one real server is selected from the group of servers whose address is v_ip:v_port, and the connection request is delivered to that real server. Selecting one real server among multiple servers, i.e. performing load balancing among the servers, aims to expand the bandwidth of the existing network and servers, increase throughput, strengthen network data processing capacity, and improve the flexibility and availability of the network.
At present, the more common methods of load balancing network data flows between servers are Layer 4 load-balancing methods and Layer 7 load-balancing methods.
The Layer 4 load-balancing method in NAT (Network Address Translation) mode is introduced below. As shown in Figure 1, it comprises:
Step 1) search the session table (Session) with the source IP/port (c_ip:c_port) and destination IP/port (v_ip:v_port) of the packet received from the client as the index, where Session refers to the data structure that records client connection information, v_ip:v_port refers to the virtual IP/port, and c_ip:c_port refers to the client IP/port;
If found, go to step 4);
If not found, carry out step 2): select a real server as the destination server;
Step 3) create an entry (v_ip:v_port/c_ip:c_port/r_ip:r_port) in the Session, where r_ip:r_port refers to the real server IP/port;
Step 4) according to the IP and service port (r_ip:r_port) of the real server corresponding to v_ip:v_port/c_ip:c_port in the Session, modify the destination IP/port of the packet to the real server IP and service port;
Step 5) calculate the checksum of the packet;
Step 6) send the packet to the real server.
The above describes the processing of the incoming data flow, that is, the data flow from the client to the server.
As shown in Figure 2, the outgoing data flow, that is, the data flow from the server to the client, is processed as follows:
Step 1') search the Session with the source IP/port (r_ip:r_port) and destination IP/port (c_ip:c_port) of the packet sent from the real server as the index;
If not found, drop the packet; otherwise step 2') according to the corresponding entry in the Session, modify the source IP/port of the packet to the virtual IP and port (v_ip:v_port);
Step 3') calculate the checksum of the packet;
Step 4') send the packet to the client.
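To make the prior-art flow of Figures 1 and 2 concrete, the following is a minimal sketch in Python of NAT-mode Layer 4 balancing: a single DNAT on the incoming path and the reverse SNAT on the outgoing path. The class and helper names (NatModeBalancer, Packet) are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass
import itertools

@dataclass
class Packet:
    src_ip: str; src_port: int
    dst_ip: str; dst_port: int

class NatModeBalancer:
    """Prior-art NAT-mode L4 balancer: DNAT on the way in, SNAT on the way out."""
    def __init__(self, vip, vport, real_servers):
        self.vip, self.vport = vip, vport
        self.real_servers = real_servers            # list of (r_ip, r_port)
        self.rr = itertools.cycle(real_servers)     # simple round-robin selector
        self.session = {}                           # (c_ip, c_port, v_ip, v_port) -> (r_ip, r_port)

    def incoming(self, pkt):
        key = (pkt.src_ip, pkt.src_port, pkt.dst_ip, pkt.dst_port)
        if key not in self.session:                 # steps 2)-3): pick a real server, create entry
            self.session[key] = next(self.rr)
        r_ip, r_port = self.session[key]
        pkt.dst_ip, pkt.dst_port = r_ip, r_port     # step 4): DNAT only, source left untouched
        return pkt                                  # steps 5)-6): recompute checksum, forward

    def outgoing(self, pkt):
        for (c_ip, c_port, v_ip, v_port), rs in self.session.items():
            if rs == (pkt.src_ip, pkt.src_port) and (pkt.dst_ip, pkt.dst_port) == (c_ip, c_port):
                pkt.src_ip, pkt.src_port = v_ip, v_port   # step 2'): SNAT back to the virtual IP
                return pkt
        return None                                 # not found: drop the packet
```

Because only the destination is rewritten on the way in, the reply still carries the client IP as its destination, which is exactly why the real servers' default route must point back at the load equalizer, as the following paragraphs explain.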
As can be seen from the above, because the destination IP of the outgoing data flow is the client IP, host or network-segment routes cannot be configured on the real server to cover the clients (client IP addresses span every network segment of the Internet, and a host or network-segment route must specify a particular IP address or network segment to match, so a few routes cannot cover all client IPs); only a default route can handle them (a default route needs no specific IP address or network segment and thus covers all clients), so the default route of the real server must point to the load equalizer. Since routing then relies on the default route, the server IP address and the backend IP address of the load equalizer must be placed in the same network segment, and next-hop information can only be obtained from Layer 2 MAC addresses, so the real server must interoperate with the load equalizer at Layer 2.
Layer 2 interoperation, i.e. interconnection at the data link layer of the OSI network model, requires all real servers to be in one broadcast domain. If the real servers are not all in one broadcast domain, for example they sit in VLANs (Virtual LANs) on different switches, a VLAN Trunk must be configured (VLAN Trunk is the technique that allows hosts in the same VLAN on different switches to interoperate at Layer 2), and multiple IP addresses must be bound to the same network card of the backend real server to achieve Layer 3 interoperation with the load equalizer through policy routing. This complicates the network topology of the machine room and the RS configuration, and thus makes maintenance difficult.
The current solution to the above problem is to adopt Layer 7 load-balancing techniques. However, this method modifies the client IP address: the client IP address is rewritten to the backend IP address of the Layer 7 load equalizer, so the backend RS can only see the backend IP address of the Layer 7 load equalizer. Since client behaviour usually needs to be analysed, and that analysis is based on logs, with this method the RS cannot see the client IP address at all, i.e. no trace of the client IP address appears in the logs, which makes analysis of client behaviour difficult. Another Layer 7 solution to this problem is to put the client IP address into the HTTP header X-Forwarded-For option, which requires the application program on the backend RS to parse the HTTP header, so the application program must be modified and complexity increases.
Summary of the invention
The main technical problem to be solved by the present invention is to provide a load-balancing method and a load equalizer that achieve cross-network-segment interconnection.
To solve the above problem, the technical scheme of the load-balancing method of the present invention is as follows:
It uses a session table to store the client IP/port and the virtual IP/port, and adds to the session table an item that stores a backend IP/port. The processing steps for the incoming data flow comprise:
Step (10), in which the session table is searched with the incoming-flow source IP/port and incoming-flow destination IP/port of the incoming packet received from the client as the index;
If not found, step (20) is performed, in which a real server is selected as the destination server; otherwise step (50) is performed;
Step (30) is performed after step (20), and in step (30) a backend IP and a backend port are selected;
Step (40), in which an entry is created in the session table according to the selected destination server, backend IP and backend port, the entry comprising the virtual IP/port, client IP/port, real server IP/port and backend IP/port;
Step (50), in which, according to the corresponding entry in the session table, the destination IP/port of the incoming packet is modified to the real server IP/port, and the source IP/port of the incoming packet is modified to the backend IP/port;
Step (60), in which the checksum of the incoming packet is calculated;
Step (70), in which the incoming packet with the calculated checksum is sent to the real server.
The processing steps for the outgoing data flow comprise:
Step (10'), in which the session table is searched with the outgoing-flow source IP/port and outgoing-flow destination IP/port of the outgoing packet received from the real server as the index;
If not found, the outgoing packet is dropped; otherwise step (20') is performed, in which, according to the corresponding entry in the session table, the source IP/port of the outgoing packet is modified to the virtual IP/port and the destination IP/port of the outgoing packet is modified to the client IP/port;
Step (30'), in which the checksum of the outgoing packet is calculated;
Step (40'), in which the outgoing packet with the calculated checksum is sent to the client.
Wherein, step (20) further comprises:
Step (201), in which the IP addresses and ports of all real servers, together with their current loads, are organised into a list;
Step (202), in which the real server IP address and port are selected from the list in turn by a polling (round-robin) algorithm.
In addition, step (30) further comprises:
Step (301), in which a backend IP is selected by the polling algorithm;
Step (302), in which a backend port is selected by the polling algorithm;
Step (303), in which the selected backend IP/port is searched for in the session table, and if it is found, the method returns to step (302).
Preferably, the following is also included after step (50):
Step (51), in which the client IP/port in the corresponding entry is added to the TCP header of the datagram as a new TCP option entry.
The checksum comprises the IP header checksum and the TCP header checksum.
The session table further comprises traffic statistics, a spin lock and flag bits.
Correspondingly, the technical scheme of the load equalizer of the present invention comprises:
a session table storing the client IP/port and the virtual IP/port, the session table further comprising an item storing a backend IP/port, and the load equalizer further comprising the following units for processing the incoming data flow:
an incoming data flow search unit, for searching the session table with the incoming-flow source IP/port and incoming-flow destination IP/port of the incoming packet received from the client as the index;
a real server selection unit, for selecting a real server as the destination server;
a backend IP and backend port selection unit, for selecting a backend IP and a backend port;
an entry establishing unit, for creating an entry in the session table according to the selected destination server, backend IP and backend port, the entry comprising the virtual IP/port, client IP/port, real server IP/port and backend IP/port;
an incoming data flow modification unit, for modifying, according to the corresponding entry in the session table, the destination IP/port of the incoming packet to the real server IP/port and the source IP/port of the incoming packet to the backend IP/port;
an incoming data flow checksum unit, for calculating the checksum of the incoming packet;
an incoming data flow sending unit, for sending the incoming packet with the calculated checksum to the real server; wherein,
if the result of the incoming data flow search unit is negative, the real server selection unit is triggered, otherwise the incoming data flow modification unit is triggered;
the real server selection unit is connected to the backend IP and backend port selection unit, which in turn is connected to the entry establishing unit;
the entry establishing unit is connected to the incoming data flow modification unit, which in turn is connected to the incoming data flow checksum unit and then to the incoming data flow sending unit.
In addition, the load equalizer of the present invention further comprises the following units for processing the outgoing data flow:
an outgoing data flow search unit, for searching the session table with the outgoing-flow source IP/port and outgoing-flow destination IP/port of the outgoing packet received from the real server as the index;
an outgoing data flow modification unit, for modifying, according to the corresponding entry in the session table, the source IP/port of the outgoing packet to the virtual IP/port and the destination IP/port of the outgoing packet to the client IP/port;
an outgoing data flow checksum unit, for calculating the checksum of the outgoing packet;
an outgoing data flow sending unit, for sending the outgoing packet to the client; wherein,
if the result of the outgoing data flow search unit is negative, the packet is dropped, otherwise the outgoing data flow modification unit is triggered;
the outgoing data flow modification unit is connected to the outgoing data flow checksum unit, which in turn is connected to the outgoing data flow sending unit.
In addition, the load equalizer of the present invention further comprises a TCP option adding unit, for adding the client IP/port in the corresponding entry to the TCP header of the datagram as a new TCP option entry.
Compared with the prior art, the load-balancing method and load equalizer of the present invention have the following beneficial effects:
First, because the present invention performs two NAT translations, namely SNAT and DNAT, cross-network-segment interconnection is achieved, so there is no need to adopt expensive Layer 7 load-balancing devices, to discard existing equipment for large-scale hardware upgrades, or to design a complicated network topology in order to improve the network data flow processing capability.
Second, because the present invention adds the client IP/port to the TCP header of the datagram as a new TCP option entry, the application program can obtain the client IP address without any modification, which facilitates the migration of large-scale application programs.
Brief description of the drawings
The present disclosure will be understood more thoroughly from the following description taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of the processing of the incoming data flow by a prior-art load-balancing method;
Fig. 2 is a flowchart of the processing of the outgoing data flow by a prior-art load-balancing method;
Fig. 3 is a flowchart of the processing of the incoming data flow by the load-balancing method of the present invention;
Fig. 4 is a flowchart of the processing of the outgoing data flow by the load-balancing method of the present invention;
Fig. 5 is a structural diagram of the processing of the incoming data flow by the load equalizer of the present invention;
Fig. 6 is a structural diagram of the processing of the outgoing data flow by the load equalizer of the present invention;
Fig. 7 is a schematic diagram of an example comprising two load equalizers.
Detailed description of embodiments
Specific embodiments of the invention will be described in detail below, but the present invention is not limited to the following specific embodiments.
As shown in Figure 3, the load-balancing method of the present invention uses a session table to store the client IP/port and the virtual IP/port, and adds to the session table an item that stores a backend IP/port, for example as shown in Table 1 below:
c_ip:c_port | v_ip:v_port | b_ip:b_port | r_ip:r_port
Table 1
As can be seen from this session table, an entry comprises (v_ip:v_port/c_ip:c_port/r_ip:r_port/b_ip:b_port), where v_ip:v_port refers to the virtual IP/port, c_ip:c_port refers to the client IP/port, r_ip:r_port refers to the real server IP/port, and b_ip:b_port refers to the backend IP/port.
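As a minimal sketch of how such a session table could be represented in Python (the names SessionEntry and SessionTable are illustrative assumptions; the patent does not prescribe a concrete data layout), an entry holds the four address pairs of Table 1, and the table is indexed both by the incoming-flow key and by the outgoing-flow key:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SessionEntry:
    c_ip: str; c_port: int    # client IP/port
    v_ip: str; v_port: int    # virtual IP/port
    b_ip: str; b_port: int    # backend IP/port
    r_ip: str; r_port: int    # real server IP/port

@dataclass
class SessionTable:
    # incoming flows are looked up by (c_ip, c_port, v_ip, v_port),
    # outgoing flows by (r_ip, r_port, b_ip, b_port)
    by_in: dict = field(default_factory=dict)
    by_out: dict = field(default_factory=dict)

    def add(self, e: SessionEntry) -> None:
        self.by_in[(e.c_ip, e.c_port, e.v_ip, e.v_port)] = e
        self.by_out[(e.r_ip, e.r_port, e.b_ip, e.b_port)] = e

    def lookup_in(self, src_ip, src_port, dst_ip, dst_port):
        return self.by_in.get((src_ip, src_port, dst_ip, dst_port))

    def lookup_out(self, src_ip, src_port, dst_ip, dst_port):
        return self.by_out.get((src_ip, src_port, dst_ip, dst_port))

    def backend_in_use(self, b_ip, b_port) -> bool:
        # used when checking whether a candidate backend IP/port is already occupied (step 303)
        return any(k[2] == b_ip and k[3] == b_port for k in self.by_out)
```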
The processing steps for the incoming data flow comprise:
Step 10) search the session table with the incoming-flow source IP/port and incoming-flow destination IP/port of the incoming packet received from the client as the index;
If not found, perform step 20): select a real server as the destination server; otherwise perform step 50);
Step 30) select a backend IP and a backend port;
Step 40) create an entry in the session table according to the selected destination server, backend IP and backend port, the entry comprising the virtual IP/port, client IP/port, real server IP/port and backend IP/port;
Step 50) according to the corresponding entry in the session table, modify the destination IP/port of the incoming packet to the real server IP/port, and modify the source IP/port of the incoming packet to the backend IP/port;
Step 60) calculate the checksum of the incoming packet;
Step 70) send the incoming packet with the calculated checksum to the real server.
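A compact sketch of steps 10)–70), assuming the Packet dataclass from the prior-art sketch and the SessionTable/SessionEntry classes above; select_real_server and select_backend are hypothetical helpers, and the checksum recomputation and transmission of steps 60)–70) are only indicated by comments:

```python
def handle_incoming(table, pkt, select_real_server, select_backend, vip, vport):
    """Steps 10)-70): double NAT (DNAT to the real server, SNAT to the backend IP/port)."""
    entry = table.lookup_in(pkt.src_ip, pkt.src_port, pkt.dst_ip, pkt.dst_port)  # step 10
    if entry is None:
        r_ip, r_port = select_real_server()                 # step 20: pick the destination server
        b_ip, b_port = select_backend(table)                # step 30: pick a backend IP/port
        entry = SessionEntry(pkt.src_ip, pkt.src_port, vip, vport,
                             b_ip, b_port, r_ip, r_port)
        table.add(entry)                                    # step 40: create the session entry
    pkt.dst_ip, pkt.dst_port = entry.r_ip, entry.r_port     # step 50: DNAT to the real server
    pkt.src_ip, pkt.src_port = entry.b_ip, entry.b_port     # step 50: SNAT to the backend IP/port
    # step 60: recompute IP/TCP checksums; step 70: forward the packet to the real server
    return pkt
```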
As shown in Figure 4, the processing steps for the outgoing data flow comprise:
Step 10') search the session table with the outgoing-flow source IP/port and outgoing-flow destination IP/port of the outgoing packet sent from the real server as the index;
If not found, drop the outgoing packet; otherwise step 20') according to the corresponding entry in the session table, modify the source IP/port of the outgoing packet to the virtual IP/port and the destination IP/port of the outgoing packet to the client IP/port;
Step 30') calculate the checksum of the outgoing packet;
Step 40') send the outgoing packet with the calculated checksum to the client.
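The mirror-image sketch for steps 10')–40'), again assuming the Packet and SessionTable classes above; the reverse double NAT restores the virtual IP/port as the source and the client IP/port as the destination:

```python
def handle_outgoing(table, pkt):
    """Steps 10')-40'): reverse double NAT on the reply path."""
    entry = table.lookup_out(pkt.src_ip, pkt.src_port, pkt.dst_ip, pkt.dst_port)  # step 10'
    if entry is None:
        return None                                          # unknown flow: drop the packet
    pkt.src_ip, pkt.src_port = entry.v_ip, entry.v_port      # step 20': SNAT back to the virtual IP/port
    pkt.dst_ip, pkt.dst_port = entry.c_ip, entry.c_port      # step 20': DNAT back to the client IP/port
    # step 30': recompute IP/TCP checksums; step 40': send the packet to the client
    return pkt
```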
As can be seen from the above, the load-balancing method of the present invention adds an item to the Session table for storing the backend IP/port. For the incoming data flow, the Session table is searched with the incoming-flow source IP/port and incoming-flow destination IP/port of the incoming packet received from the client as the index, where the incoming-flow source IP/port is the client IP/port (c_ip:c_port) and the incoming-flow destination IP/port is the virtual IP/port (v_ip:v_port). If a corresponding entry is found in the Session table, the connection already exists and the two NAT (Network Address Translation) translations are carried out directly according to that entry. Otherwise, i.e. if no corresponding entry is found in the Session table, a real server is selected for the incoming packet as the destination server, a backend IP and backend port are then selected, and (client IP/port, virtual IP/port, real IP/port, backend IP/port) is stored in the Session table as one entry. Then, according to this entry in the Session table, the source IP/port of the incoming packet is modified to the backend IP/port, i.e. one SNAT (Source Network Address Translation) is performed, and the destination IP/port of the incoming packet is modified to the real IP/port, i.e. one DNAT (Destination Network Address Translation) is performed, so that two NAT translations are carried out in total. At this point the source IP/port of the incoming packet is the backend IP/port and its destination IP/port is the real IP/port; as far as the backend real server is concerned, the incoming packet was sent from the backend IP/port and the server cannot see the client, but the client IP/port is not lost: it is present in the Session table. The present invention can also add this IP/port information to a TCP option, so that the application program on the backend real server can see the client information.
For the outgoing data flow, the session table is first searched with the outgoing-flow source IP/port and outgoing-flow destination IP/port of the outgoing packet sent from the real server as the index, where the source IP/port of the outgoing packet sent from the real server is the real IP/port and the destination IP/port of the outgoing flow is the backend IP/port. If no entry is found, the outgoing packet is dropped (which improves security to a certain extent); otherwise, according to the corresponding entry in the session table, the source IP/port of the outgoing packet is modified to the virtual IP/port and the destination IP/port of the outgoing packet is modified to the client IP/port (for example, the 4 bytes of the source IP address in the IP header of the outgoing packet are replaced with v_ip, the 2 bytes of the source port in the TCP header are replaced with v_port, the 4 bytes of the destination IP address in the IP header are replaced with c_ip, and the 2 bytes of the destination port are modified to c_port), i.e. SNAT+DNAT is performed, so that, as far as the client is concerned, the packet was sent to it from the virtual IP/port.
Further, step 20) comprises:
Step 201) organise the IP addresses and ports of all real servers, together with their current loads, into a list;
Step 202) select the real server IP address and port from the list in turn by a polling (round-robin) algorithm.
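A minimal round-robin selector over such a list, as a sketch of steps 201)–202); the RealServerPool name and the decision to ignore the load field here are illustrative assumptions (the recorded load could equally be used for weighted selection):

```python
class RealServerPool:
    """Steps 201)-202): real servers kept in a list and picked in turn (round robin)."""
    def __init__(self, servers):
        # each element: (r_ip, r_port, current_load)
        self.servers = list(servers)
        self.next = 0

    def select(self):
        r_ip, r_port, _load = self.servers[self.next % len(self.servers)]
        self.next += 1
        return r_ip, r_port

# illustrative pool, addresses taken from the deployment example later in the text
pool = RealServerPool([("10.13.65.2", 8080, 0), ("10.13.65.3", 8080, 0)])
```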
In addition, step 30) further comprises:
Step 301) select a backend IP by the polling algorithm;
Step 302) select a backend port by the polling algorithm;
Step 303) search the session table for the selected backend IP/port; if it is found, go back to step 302).
The above steps avoid backend port conflicts: they check whether the selected backend port is already occupied and, if it is, select another one.
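A sketch of steps 301)–303), assuming the SessionTable.backend_in_use check from the earlier sketch; the class name, port range and simple cycling strategy are illustrative assumptions:

```python
import itertools

class BackendSelector:
    """Steps 301)-303): pick a backend IP and a backend port in turn, skipping occupied ports."""
    def __init__(self, backend_ips, port_lo=1024, port_hi=65535):
        self.ip_cycle = itertools.cycle(backend_ips)        # step 301: backend IPs in turn
        self.port_cycle = itertools.cycle(range(port_lo, port_hi + 1))
        self.port_count = port_hi - port_lo + 1

    def select(self, table):
        b_ip = next(self.ip_cycle)
        for _ in range(self.port_count):                    # step 302: next candidate port
            b_port = next(self.port_cycle)
            if not table.backend_in_use(b_ip, b_port):      # step 303: occupied? try the next one
                return b_ip, b_port
        raise RuntimeError("backend port pool exhausted for " + b_ip)

# illustrative backend address pool, drawn from the deployment example later in the text
selector = BackendSelector(["10.13.65.128", "10.13.65.129"])
```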
In addition, as further shown in Figure 3, the following is also included after step 50):
Step 51) add the client IP/port in the corresponding entry to the TCP header of the datagram as a new TCP option entry.
The checksum may comprise the IP header checksum and the TCP header checksum, whose calculation methods are given in the standards RFC 791 and RFC 793.
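For reference, the IP header checksum of RFC 791 is the 16-bit one's complement of the one's-complement sum of the header's 16-bit words (the TCP checksum of RFC 793 uses the same sum over a pseudo-header plus the segment). A small sketch of that sum, with the checksum field assumed to be zeroed before computation:

```python
def ones_complement_checksum(data: bytes) -> int:
    """RFC 791 style checksum: one's complement of the one's-complement 16-bit sum."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # add each big-endian 16-bit word
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back into the low 16 bits
    return ~total & 0xFFFF
```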
In addition, the session table may also comprise traffic statistics, a spin lock, flag bits and the like, where the traffic statistics are used for client behaviour analysis and access control, and the spin lock and flag bits are used for maintaining the session table.
In addition, the client IP address and port carried in the TCP option can be parsed out of the packet at the real server end.
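As a sketch of how the client IP/port might be packed into, and parsed back out of, a TCP option: the option kind value 253 (reserved for experimental use) and the kind/length/IPv4/port layout are illustrative assumptions, since the patent does not fix a concrete encoding.

```python
import socket
import struct

OPT_KIND = 253  # assumed experimental option kind; total length = 2 header bytes + 4 IP + 2 port

def build_client_option(c_ip: str, c_port: int) -> bytes:
    """Step 51 / TCP option adding unit: encode client IP/port as kind, length, IPv4, port."""
    return struct.pack("!BB4sH", OPT_KIND, 8, socket.inet_aton(c_ip), c_port)

def parse_client_option(options: bytes):
    """Real-server side: walk the TCP options and recover the client IP/port if present."""
    i = 0
    while i < len(options):
        kind = options[i]
        if kind == 0:                      # end-of-option-list
            break
        if kind == 1:                      # NOP, single byte
            i += 1
            continue
        length = options[i + 1]
        if length < 2:                     # malformed option, stop scanning
            break
        if kind == OPT_KIND and length == 8:
            ip_raw, port = struct.unpack("!4sH", options[i + 2:i + 8])
            return socket.inet_ntoa(ip_raw), port
        i += length
    return None
```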
Correspondingly, the invention also discloses a load equalizer, which comprises a session table storing the client IP/port and the virtual IP/port, the session table further comprising an item storing a backend IP/port, and the load equalizer further comprising the following units for processing the incoming data flow, as shown in Figure 5:
an incoming data flow search unit 1, for searching the session table with the incoming-flow source IP/port and incoming-flow destination IP/port of the incoming packet received from the client as the index;
a real server selection unit 2, for selecting a real server as the destination server;
a backend IP and backend port selection unit 3, for selecting a backend IP and a backend port;
an entry establishing unit 4, for creating an entry (v_ip:v_port/c_ip:c_port/r_ip:r_port/b_ip:b_port) in the session table according to the selected destination server, backend IP and backend port, where v_ip:v_port refers to the virtual IP/port, c_ip:c_port refers to the client IP/port, r_ip:r_port refers to the real server IP/port, and b_ip:b_port refers to the backend IP/port;
an incoming data flow modification unit 5, for modifying, according to the corresponding entry in the session table, the destination IP/port of the incoming packet to the real server IP/port and the source IP/port of the incoming packet to the backend IP/port;
an incoming data flow checksum unit 6, for calculating the checksum of the incoming packet;
an incoming data flow sending unit 7, for sending the incoming packet with the calculated checksum to the real server; wherein,
if the result of the incoming data flow search unit 1 is negative, the real server selection unit 2 is triggered, otherwise the incoming data flow modification unit 5 is triggered;
the real server selection unit 2 is connected to the backend IP and backend port selection unit 3, which in turn is connected to the entry establishing unit 4;
the entry establishing unit 4 is connected to the incoming data flow modification unit 5, which in turn is connected to the incoming data flow checksum unit 6 and then to the incoming data flow sending unit 7.
As can be seen from the above, the incoming data flow search unit 1 of the load equalizer of the present invention searches the session table with the incoming-flow source IP/port and incoming-flow destination IP/port of the incoming packet received from the client as the index, where the incoming-flow source IP/port is the client IP/port and the incoming-flow destination IP/port is the virtual IP/port. If an entry is found, the incoming data flow modification unit 5 is triggered to carry out the two NAT translations according to the corresponding entry found in the session table. If no entry is found, the real server selection unit 2 is triggered: it selects a real server as the destination server, after which a backend IP and backend port are selected by the backend IP and backend port selection unit 3, and once the backend IP/port has been determined, an entry (v_ip:v_port/c_ip:c_port/r_ip:r_port/b_ip:b_port) is created in the session table by the entry establishing unit 4. The incoming data flow modification unit 5 then modifies the destination IP/port of the incoming packet (currently the virtual IP/port) to the real server IP/port, and the source IP/port of the incoming packet (currently the client IP/port) to the backend IP/port, i.e. the two NAT translations (SNAT+DNAT) have been performed; at this point, as far as the real server is concerned, the incoming packet was sent from the backend IP/port. The checksum is then calculated by the incoming data flow checksum unit 6, after which the incoming packet is sent to the real server by the incoming data flow sending unit 7.
As shown in Figure 6, the load equalizer of the present invention further comprises the following units for processing the outgoing data flow:
an outgoing data flow search unit 8, for searching the session table with the outgoing-flow source IP/port and outgoing-flow destination IP/port of the outgoing packet received from the real server as the index;
an outgoing data flow modification unit 9, for modifying, according to the corresponding entry in the session table, the source IP/port of the outgoing packet to the virtual IP/port and the destination IP/port of the outgoing packet to the client IP/port;
an outgoing data flow checksum unit 10, for calculating the checksum of the outgoing packet;
an outgoing data flow sending unit 11, for sending the outgoing packet with the calculated checksum to the client; wherein,
if the result of the outgoing data flow search unit 8 is negative, the outgoing packet is dropped, otherwise the outgoing data flow modification unit 9 is triggered;
the outgoing data flow modification unit 9 is connected to the outgoing data flow checksum unit 10, which in turn is connected to the outgoing data flow sending unit 11.
As can be seen from the above, for the outgoing data flow, the outgoing data flow search unit 8 searches the session table with the outgoing-flow source IP/port (now r_ip:r_port) and outgoing-flow destination IP/port (b_ip:b_port) of the outgoing packet received from the real server as the index. If no entry is found, the outgoing packet is dropped; otherwise the outgoing data flow modification unit 9 modifies, according to the corresponding entry found in the session table, the source IP/port of the outgoing packet to the virtual IP/port and the destination IP/port of the outgoing packet to the client IP/port, i.e. the two NAT translations (SNAT+DNAT) are carried out; to the client, the returned packet now appears to come from the destination it originally addressed (the virtual IP/port). The outgoing data flow checksum unit 10 then calculates the checksum of the outgoing packet, and finally the outgoing data flow sending unit 11 sends the outgoing packet to the client.
Again as shown in Figure 6, the load equalizer of the present invention further comprises a TCP option adding unit 12, for adding the client IP/port in the corresponding entry to the TCP header of the datagram as a new TCP option entry.
Correspondingly, a TCP option resolution unit can be inserted at the real server end; this TCP option resolution unit parses the client IP address and port carried in the TCP option out of the packet and hands them to the application program on the real server.
The technical scheme of the present invention is described below, taking a deployment with two load equalizers as an example.
As shown in Figure 7, two load equalizers (LB_A and LB_B) are included, and the virtual address VIP 210.77.19.23 is maintained by heartbeat (the VRRP protocol). The backend network cards of the two LBs are connected to the different internal network segments 10.13.65.x/24 and 10.13.66.x/24, the configured interface IPs are 10.13.65.1 and 10.13.66.1, and the two LBs are configured with the backend address pools 10.13.65.128–10.13.65.254 and 10.13.66.128–10.13.66.254. The backend real servers (RS), also called Web servers, are each configured with two network cards for redundancy; the interface IPs of the two network cards (which are also the service IPs) lie in the two network segments 10.13.65.x/24 and 10.13.66.x/24 respectively, and the interface IP addresses of the four real servers shown in the figure are 10.13.65.2–10.13.65.5 and 10.13.66.2–10.13.66.5 respectively (four servers, each with two network cards for redundancy). In this example LB_A is assumed to be in the working state (the VIP is on LB_A), its service port is 80, and the service port of the backend real servers is 8080.
In the figure, Switch refers to the network device that implements the Layer 2 and Layer 3 interconnection functions; its role in the network topology is to connect the servers and the LBs together so that they can intercommunicate.
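The deployment just described could be captured in a configuration structure along the following lines; this is a sketch only, the field names are assumptions, and LB_B mirrors LB_A with the VIP held by whichever balancer VRRP elects as master:

```python
DEPLOYMENT = {
    "vip": ("210.77.19.23", 80),                     # virtual IP/port kept alive via VRRP heartbeat
    "balancers": {
        "LB_A": {"backend_if": "10.13.65.1",
                 "backend_pool": ("10.13.65.128", "10.13.65.254")},
        "LB_B": {"backend_if": "10.13.66.1",
                 "backend_pool": ("10.13.66.128", "10.13.66.254")},
    },
    # four real servers, dual-homed on both internal segments, service port 8080
    "real_servers": [
        {"ips": ("10.13.65." + str(i), "10.13.66." + str(i)), "port": 8080}
        for i in range(2, 6)
    ],
}
```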
The session table of this load equalizer is shown in Table 2:
c_ip:c_port | v_ip:v_port | b_ip:b_port | r_ip:r_port | Traffic statistics | Other
66.249.89.105:236 | 210.77.19.23:80 | 10.13.65.128:2000 | 10.13.65.2:8080 | _ | _
Table 2
The Session table of this load equalizer comprises at least the four items client IP/port, virtual IP/port, backend IP/port and real server IP/port, and in addition comprises traffic statistics and other fields. Only one entry is shown in Table 2; of course, there can be multiple entries.
The processing of the incoming data flow is as follows:
First, a packet sent from the client is received; the source IP/port of this packet is 66.249.89.105:236 and the destination IP/port is 210.77.19.23:80. If it is an initial connection, the packet is a connection request marked with SYN (Synchronize); if it is an ordinary data packet, it is marked with ACK (Acknowledge); if it is a connection-close request, it is marked with FIN (Finish). In this example it is assumed to be an initial connection, i.e. the packet is marked with SYN.
Suppose the Session table is empty at this point; the Session table is then searched with 210.77.19.23:80 and 66.249.89.105:236 as the index. In this example the entry is not found, so the RR (Round Robin, polling) algorithm is used to select a real server as the destination server (other selection strategies can of course also be used); suppose real server RS1, 10.13.65.2:8080, is selected. The RR algorithm can then also be used to select a backend IP from the backend IP pool; suppose the selected backend IP address is 10.13.65.128. After the backend IP is selected, a backend port is selected; to minimise the probability of a port conflict (the port being occupied), the RR algorithm is still used to select ports in turn. Suppose port 2000 is selected. If, when searching the Session table, an entry with 10.13.65.128:2000 and 10.13.65.2:8080 were found, this would show that the backend port is occupied and the port must be changed, i.e. a backend port would have to be selected again. In this example it is assumed that no such entry is found in the Session table, that is, port 2000 can be used.
Next, the entry (210.77.19.23:80, 66.249.89.105:236, 10.13.65.2:8080, 10.13.65.128:2000) is inserted into the Session table, as shown in Table 2. Then, according to the entry in the Session table, the destination IP/port of the packet is modified to the service IP and service port of RS1, and the source IP/port is modified to the backend IP/port. In this example the destination IP/port 210.77.19.23:80 of the packet is modified to 10.13.65.2:8080, and the source IP/port 66.249.89.105:236 is modified to 10.13.65.128:2000.
The checksum is then calculated, comprising the IP header checksum and the TCP header checksum.
Finally, the packet is sent to the selected real server RS1. Real server RS1 receives the connection request packet with the SYN mark, and will then return a reply packet marked with SYN/ACK.
For the outgoing data flow, the load equalizer of the present invention processes as follows:
Suppose real server RS1 sends a reply packet whose source IP/port is 10.13.65.2:8080 and whose destination IP/port is 10.13.65.128:2000. The Session table is searched with 10.13.65.2:8080 and 10.13.65.128:2000 as the index; if nothing is found, the packet is dropped. In this example the corresponding entry is found, as shown in Table 2. Then, according to the found entry, the source IP/port of the packet is modified to the virtual IP/port of the virtual server and the destination IP/port is modified to the client IP/port; in this example the source IP/port is modified to 210.77.19.23:80 and the destination IP/port to 66.249.89.105:236. The checksum, comprising the IP header checksum and the TCP header checksum, is then calculated. Finally, the packet is sent to the client.
In summary, because the present invention performs two NAT translations, namely SNAT and DNAT, Layer 4 load balancing is achieved, so the network data flow processing capability can be improved without discarding existing equipment for large-scale hardware upgrades.
Second, because the present invention adds the client IP/port to the datagram as a new TCP option entry, it can be parsed by the TCP/IP protocol stack on the backend RS, which avoids the complexity of modifying the application program and can support application-layer protocols other than HTTP.
In addition, the present invention uses the TCP option to carry the client IP/port, so that the application program running on the real server can still obtain the client's connection information after the two NAT translations, without any modification.
Although specific embodiments of the present invention have been described above with reference to the accompanying drawings, those skilled in the art can make various changes, modifications and equivalent substitutions to the present invention without departing from its spirit and scope. Such changes, modifications and equivalent substitutions are all intended to fall within the spirit and scope defined by the appended claims.

Claims (10)

1. A load-balancing method, using a session table to store a client IP/port and a virtual IP/port, characterized in that an item storing a backend IP/port is added to the session table, and the processing steps for an incoming data flow comprise:
Step 10, in which the session table is searched with the incoming-flow source IP/port and incoming-flow destination IP/port of an incoming packet received from a client as the index;
If not found, step 20 is performed, in which a real server is selected as the destination server; otherwise step 50 is performed;
Step 30 is performed after step 20, and in step 30 a backend IP and a backend port are selected;
Step 40, in which an entry is created in the session table according to the selected destination server, backend IP and backend port, the entry comprising the virtual IP/port, client IP/port, real server IP/port and backend IP/port;
Step 50, in which, according to the corresponding entry in the session table, the destination IP/port of the incoming packet is modified to the real server IP/port, and the source IP/port of the incoming packet is modified to the backend IP/port;
Step 60, in which the checksum of the incoming packet is calculated;
Step 70, in which the incoming packet with the calculated checksum is sent to the real server.
2. The load-balancing method as claimed in claim 1, characterized in that the processing steps for an outgoing data flow comprise:
Step 10', in which the session table is searched with the outgoing-flow source IP/port and outgoing-flow destination IP/port of an outgoing packet received from the real server as the index;
If not found, the outgoing packet is dropped; otherwise step 20' is performed, in which, according to the corresponding entry in the session table, the source IP/port of the outgoing packet is modified to the virtual IP/port and the destination IP/port of the outgoing packet is modified to the client IP/port;
Step 30', in which the checksum of the outgoing packet is calculated;
Step 40', in which the outgoing packet with the calculated checksum is sent to the client.
3. The load-balancing method as claimed in claim 2, characterized in that step 20 further comprises:
Step 201, in which the IP addresses and ports of all real servers, together with their current loads, are organised into a list;
Step 202, in which the real server IP address and port are selected from the list in turn by a polling algorithm.
4. The load-balancing method as claimed in claim 3, characterized in that step 30 further comprises:
Step 301, in which a backend IP is selected by the polling algorithm;
Step 302, in which a backend port is selected by the polling algorithm;
Step 303, in which the selected backend IP/port is searched for in the session table, and if it is found, the method returns to step 302.
5. The load-balancing method as claimed in any one of claims 1 to 3, characterized in that the following is also included after step 50:
Step 51, in which the client IP/port in the corresponding entry is added to the TCP header of the datagram as a new TCP option entry.
6. The load-balancing method as claimed in claim 5, characterized in that the checksum comprises an IP header checksum and a TCP header checksum.
7. The load-balancing method as claimed in claim 6, characterized in that the session table further comprises traffic statistics, a spin lock and flag bits.
8. A load equalizer, comprising a session table storing a client IP/port and a virtual IP/port, characterized in that the session table further comprises an item storing a backend IP/port, and the load equalizer further comprises the following units for processing an incoming data flow:
an incoming data flow search unit, for searching the session table with the incoming-flow source IP/port and incoming-flow destination IP/port of an incoming packet received from a client as the index;
a real server selection unit, for selecting a real server as the destination server;
a backend IP and backend port selection unit, for selecting a backend IP and a backend port;
an entry establishing unit, for creating an entry in the session table according to the selected destination server, backend IP and backend port, the entry comprising the virtual IP/port, client IP/port, real server IP/port and backend IP/port;
an incoming data flow modification unit, for modifying, according to the corresponding entry in the session table, the destination IP/port of the incoming packet to the real server IP/port and the source IP/port of the incoming packet to the backend IP/port;
an incoming data flow checksum unit, for calculating the checksum of the incoming packet;
an incoming data flow sending unit, for sending the incoming packet with the calculated checksum to the real server; wherein,
if the result of the incoming data flow search unit is negative, the real server selection unit is triggered, otherwise the incoming data flow modification unit is triggered;
the real server selection unit is connected to the backend IP and backend port selection unit, which in turn is connected to the entry establishing unit;
the entry establishing unit is connected to the incoming data flow modification unit, which in turn is connected to the incoming data flow checksum unit and then to the incoming data flow sending unit.
9. The load equalizer as claimed in claim 8, characterized by further comprising the following units for processing an outgoing data flow:
an outgoing data flow search unit, for searching the session table with the outgoing-flow source IP/port and outgoing-flow destination IP/port of an outgoing packet received from the real server as the index;
an outgoing data flow modification unit, for modifying, according to the corresponding entry in the session table, the source IP/port of the outgoing packet to the virtual IP/port and the destination IP/port of the outgoing packet to the client IP/port;
an outgoing data flow checksum unit, for calculating the checksum of the outgoing packet;
an outgoing data flow sending unit, for sending the outgoing packet with the calculated checksum to the client; wherein,
if the result of the outgoing data flow search unit is negative, the packet is dropped, otherwise the outgoing data flow modification unit is triggered;
the outgoing data flow modification unit is connected to the outgoing data flow checksum unit, which in turn is connected to the outgoing data flow sending unit.
10. The load equalizer as claimed in claim 9, characterized by further comprising a TCP option adding unit, for adding the client IP/port in the corresponding entry to the TCP header of the datagram as a new TCP option entry.
CN201010184118.6A 2010-05-20 2010-05-20 Load-balancing method and load equalizer Active CN102255932B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010184118.6A CN102255932B (en) 2010-05-20 2010-05-20 Load-balancing method and load equalizer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010184118.6A CN102255932B (en) 2010-05-20 2010-05-20 Load-balancing method and load equalizer

Publications (2)

Publication Number Publication Date
CN102255932A CN102255932A (en) 2011-11-23
CN102255932B true CN102255932B (en) 2015-09-09

Family

ID=44982926

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010184118.6A Active CN102255932B (en) 2010-05-20 2010-05-20 Load-balancing method and load equalizer

Country Status (1)

Country Link
CN (1) CN102255932B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103023942B (en) * 2011-09-27 2016-08-03 北京奇虎科技有限公司 A kind of server load balancing method, Apparatus and system
CN103297552B (en) * 2012-03-02 2016-05-25 百度在线网络技术(北京)有限公司 Transmit client ip v4 address and port method and the device to back-end server
CN103297407B (en) * 2012-03-02 2016-05-25 百度在线网络技术(北京)有限公司 Transmit client ip v6 address and port method and the device to back-end server
CN103368841B (en) * 2012-03-29 2016-08-17 深圳市腾讯计算机系统有限公司 Message forwarding method and device
CN103491016B (en) * 2012-06-08 2017-11-17 百度在线网络技术(北京)有限公司 Source address transmission method, system and device in UDP SiteServer LBSs
CN103491053A (en) * 2012-06-08 2014-01-01 北京百度网讯科技有限公司 UDP load balancing method, UDP load balancing system and UDP load balancing device
CN103491065B (en) * 2012-06-14 2018-08-14 南京中兴软件有限责任公司 A kind of Transparent Proxy and its implementation
CN107786669B (en) * 2017-11-10 2021-06-22 华为技术有限公司 Load balancing processing method, server, device and storage medium
CN108156040A (en) * 2018-01-30 2018-06-12 北京交通大学 A kind of central control node in distribution cloud storage system
CN108769291A (en) * 2018-06-22 2018-11-06 北京云枢网络科技有限公司 A kind of message processing method, device and electronic equipment
CN109729104B (en) * 2019-03-19 2021-08-17 北京百度网讯科技有限公司 Client source address acquisition method, device, server and computer readable medium
CN110166570B (en) * 2019-06-04 2022-06-28 杭州迪普科技股份有限公司 Service session management method and device, and electronic device
CN113923202B (en) * 2021-10-18 2023-10-13 成都安恒信息技术有限公司 Load balancing method based on HTTP cluster server
CN115118638A (en) * 2022-06-29 2022-09-27 济南浪潮数据技术有限公司 Method, device and medium for monitoring back-end network card

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040268358A1 (en) * 2003-06-30 2004-12-30 Microsoft Corporation Network load balancing with host status information
CN101018206A (en) * 2007-02-14 2007-08-15 华为技术有限公司 Packet message processing method and device
CN101136929A (en) * 2007-10-19 2008-03-05 杭州华三通信技术有限公司 Internet small computer system interface data transmission method and apparatus
CN101136851A (en) * 2007-09-29 2008-03-05 华为技术有限公司 Stream forwarding method and equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040268358A1 (en) * 2003-06-30 2004-12-30 Microsoft Corporation Network load balancing with host status information
CN101018206A (en) * 2007-02-14 2007-08-15 华为技术有限公司 Packet message processing method and device
CN101136851A (en) * 2007-09-29 2008-03-05 华为技术有限公司 Stream forwarding method and equipment
CN101136929A (en) * 2007-10-19 2008-03-05 杭州华三通信技术有限公司 Internet small computer system interface data transmission method and apparatus

Also Published As

Publication number Publication date
CN102255932A (en) 2011-11-23

Similar Documents

Publication Publication Date Title
CN102255932B (en) Load-balancing method and load equalizer
US10547544B2 (en) Network fabric overlay
US20220393974A1 (en) Packet Processing System and Method, Machine-Readable Storage Medium, and Program Product
US9160701B2 (en) Addressing method, addressing apparatus, fabric manager, switch, and data routing method
US7363347B2 (en) Method and system for reestablishing connection information on a switch connected to plural servers in a computer network
US20180343228A1 (en) Packet Generation Method Based on Server Cluster and Load Balancer
US7290059B2 (en) Apparatus and method for scalable server load balancing
US7051115B2 (en) Method and apparatus for providing a single system image in a clustered environment
US20130332584A1 (en) Load balancing methods and devices
US8560660B2 (en) Methods and apparatus for managing next hop identifiers in a distributed switch fabric system
CN111600806A (en) Load balancing method and device, front-end scheduling server, storage medium and equipment
CN104618243B (en) Method for routing, apparatus and system, Scheduling of Gateway method and device
CN104734955A (en) Network function virtualization implementation method, wide-band network gateway and control device
CN109547354B (en) Load balancing method, device, system, core layer switch and storage medium
CN104486402A (en) Combined equalizing method based on large-scale website
JP5861772B2 (en) Network appliance redundancy system, control device, network appliance redundancy method and program
WO2012149857A1 (en) Routing method for data center network system
CN102970242A (en) Method for achieving load balancing
CN101827039A (en) Method and equipment for load sharing
US8923277B1 (en) Methods and apparatus related to flexible physical interface naming in a distributed switch fabric system
CN103297354B (en) Server interlinkage system, server and data forwarding method
CN104734930B (en) Method and device for realizing access of Virtual Local Area Network (VLAN) to Variable Frequency (VF) network and Fiber Channel Frequency (FCF)
JP5437290B2 (en) Service distribution method, service distribution device, and program
CN106790502B (en) Load balancing system of IPv4 terminal and IPv6 service intercommunication service based on NAT64 prefix
Chiueh et al. Peregrine: An all-layer-2 container computer network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant