CN102255932A - Load balancing method and load balancer - Google Patents
Load balancing method and load balancer
- Publication number: CN102255932A (application CN2010101841186A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a load balancing method and a load balancer. In the load balancing method of the present invention, a session table stores the client IP/port and the virtual IP/port, and a field storing the back-end IP/port is added to the session table. An inbound data flow is processed as follows: if no entry indexed by the inbound flow's source IP/port and destination IP/port is found in the session table, a real server and a back-end IP and back-end port are selected, and an entry is created in the session table comprising the virtual IP/port, client IP/port, real server IP/port, and back-end IP/port; according to the corresponding entry in the session table, the destination IP/port of the inbound packet is modified to the real server IP/port and the source IP/port is modified to the back-end IP/port, that is, NAT translation is performed twice. An outbound data flow likewise undergoes two NAT translations. By adopting the technical scheme of the invention, interconnection across network segments is achieved.
Description
Technical field
The present invention relates generally to computer networks, and more particularly to a load balancing method and a load balancer.
Background technology
With the development of computers, networks, and related technologies, networks have spread into every corner of people's lives. At the core of existing networks, access volume and data traffic grow rapidly as business volume rises, and the demand on processing and computing capability increases correspondingly, so that a single server can no longer cope.
To address this problem, one approach is to discard existing equipment and perform large-scale hardware upgrades. On the one hand this wastes existing resources; on the other hand, the next rise in traffic is again difficult to handle, since even the best-performing single machine cannot satisfy business demand that grows without limit, and another costly round of hardware upgrades is then required. This approach is therefore very expensive, and its cost keeps increasing as traffic grows. Another approach is to use multiple servers to share the traffic. The physical servers at the back end can be grouped, with each group supporting a particular application. A virtual IP/port (v_ip:v_port) is configured for each group to provide service externally, and the domain name server (DNS) stores this virtual IP/port as the address of the application, rather than the real server addresses. When a client wants to access the server, it sends packets with v_ip:v_port as the destination IP/port; according to the destination IP/port of the packet, a real server is selected from the group addressed by v_ip:v_port, and the connection request is forwarded to that real server. Selecting one real server among many, that is, performing load balancing among the servers, aims to expand the bandwidth of existing networks and servers, increase throughput, strengthen network data processing capability, and improve the flexibility and availability of the network.
At present, the commonly used methods of load balancing network data flows among servers are the layer-4 load balancing method and the layer-7 load balancing method.
The layer-4 load balancing method in NAT (Network Address Translation) mode is presented below. As shown in Figure 1, it comprises:
Step 1) searching the session table (Session) with the source IP/port (c_ip:c_port) and the destination IP/port (v_ip:v_port) of the packet received from the client as the index, wherein the Session refers to the data structure used to record client connection information, v_ip:v_port refers to the virtual IP/port, and c_ip:c_port refers to the client IP/port;
If found, going to step 4);
If not found, executing step 2): selecting a real server as the destination server;
Step 3) creating an entry (v_ip:v_port/c_ip:c_port/r_ip:r_port) in the Session, wherein r_ip:r_port refers to the real server IP/port;
Step 4) according to the IP and service port (r_ip:r_port) of the real server corresponding to v_ip:v_port/c_ip:c_port in the Session, modifying the destination IP/port of the packet to the real server IP and service port;
Step 5) computing the checksum of the packet;
Step 6) sending the packet to the real server.
The above is the processing method for an inbound data flow, that is, a data flow from the client to the server.
As shown in Figure 2, an outbound data flow, that is, a data flow from the server to the client, is processed as follows:
Step 1') searching the Session with the source IP/port (r_ip:r_port) and the destination IP/port (c_ip:c_port) of the packet sent from the real server as the index;
If not found, dropping the packet; otherwise step 2') modifying the source IP/port of the packet to the virtual IP and port (v_ip:v_port) according to the corresponding entry in the Session;
Step 3') computing the checksum of the packet;
Step 4') sending the packet to the client.
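The prior-art flow above can be modeled in a few lines of Python; this is a minimal illustrative sketch, not the patent's implementation: all names are invented, and the checksum and forwarding steps are elided. It shows that only the destination is rewritten on the way in and only the source on the way out, so the client IP survives as the packet's source address:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: tuple  # (ip, port)
    dst: tuple  # (ip, port)

session = {}                                  # (c_ip:c_port, v_ip:v_port) -> r_ip:r_port
real_servers = [("10.0.0.2", 8080), ("10.0.0.3", 8080)]

def inbound(pkt):
    key = (pkt.src, pkt.dst)                  # step 1): index by source and destination
    if key not in session:
        session[key] = real_servers[0]        # steps 2)-3): select server, create entry
    pkt.dst = session[key]                    # step 4): one DNAT; source left untouched
    return pkt                                # steps 5)-6): checksum and send omitted

def outbound(pkt):
    for (c, v), r in session.items():
        if (pkt.src, pkt.dst) == (r, c):      # step 1'): find the matching session
            pkt.src = v                       # step 2'): one SNAT back to the virtual IP
            return pkt
    return None                               # not found: drop

p = inbound(Packet(("1.2.3.4", 5000), ("210.0.0.1", 80)))
q = outbound(Packet(("10.0.0.2", 8080), ("1.2.3.4", 5000)))
```

Because the server's reply still carries the client IP as its destination, it must be routed back through the balancer, which is the routing constraint discussed next.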
From the above, since the destination IP of an outbound data flow is a client IP, host or network-segment routes cannot be configured on the real server (client IP addresses cover every segment of the Internet, and a host or segment route must match a specific IP address or segment, so a handful of such routes cannot cover all client IPs); the traffic can only be handled with a default route (a default route requires no specific IP or segment match, so a single default route covers all clients). The default route of the real server must therefore point to the load balancer. Because the next hop of the default route is resolved through layer-2 MAC addresses, the server IP address must be in the same network segment as the load balancer's back-end IP address, that is, the server must have layer-2 connectivity with the load balancer.
Layer-2 connectivity means a connection at the data link layer of the OSI network model, which requires all real servers to be in one broadcast domain. If they are not, for example if they sit in VLANs (Virtual LANs) on different switches, then a VLAN Trunk must be configured (VLAN Trunk is a technique that connects hosts in the same VLAN on different switches at layer 2), and multiple IP addresses must be bound on the same network card of each back-end real server to achieve layer-3 connectivity with the load balancer through policy routing. This complicates the network topology of the machine room and the RS configuration, making maintenance difficult.
The current solution to the above problem is to adopt layer-7 load balancing. That method, however, modifies the client IP address: the client IP address is rewritten to the back-end IP address of the layer-7 load balancer, so the back-end RS can only see the back-end IP of the layer-7 balancer. Client behavior usually needs to be analyzed, and such analysis is based on logs; with this method the RS cannot see the client IP address at all, so no trace of it appears in the logs, which makes client behavior analysis difficult. Another layer-7 solution to this problem puts the client IP address into the X-Forwarded-For option of the HTTP header, which requires the application on the back-end RS to parse the HTTP header; the application must therefore be modified, increasing complexity.
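For comparison, parsing the X-Forwarded-For header mentioned above is simple in itself, but it must be done inside the application on each back-end RS, which is precisely the modification the passage objects to. A minimal sketch, assuming the common convention of a comma-separated list with the original client first:

```python
def client_ip_from_xff(headers):
    """Return the original client IP from an X-Forwarded-For header, or None."""
    xff = headers.get("X-Forwarded-For", "")
    if not xff:
        return None
    return xff.split(",")[0].strip()   # first hop is the original client

ip = client_ip_from_xff({"X-Forwarded-For": "1.2.3.4, 10.0.0.7"})   # "1.2.3.4"
```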
Summary of the invention
The main technical problem to be solved by the present invention is to provide a load balancing method and a load balancer capable of interconnection across network segments.
In order to solve the above problem, the technical scheme of the load balancing method of the present invention is:
A session table is used to store the client IP/port and the virtual IP/port, and a field storing the back-end IP/port is added to the session table. The processing steps for an inbound data flow comprise:
Step (10): searching the session table with the inbound source IP/port and the inbound destination IP/port of the inbound packet received from the client as the index;
If not found, executing step (20): selecting a real server as the destination server; otherwise executing step (50);
After step (20), executing step (30): selecting a back-end IP and a back-end port;
Step (40): creating an entry in the session table according to the selected destination server, back-end IP, and back-end port, the entry comprising the virtual IP/port, client IP/port, real server IP/port, and back-end IP/port;
Step (50): according to the corresponding entry in the session table, modifying the inbound destination IP/port of the inbound packet to the real server IP/port, and modifying the inbound source IP/port of the inbound packet to the back-end IP/port;
Step (60): computing the checksum of the inbound packet;
Step (70): sending the inbound packet with the computed checksum to the real server.
The processing steps for an outbound data flow comprise:
Step (10'): searching the session table with the outbound source IP/port and the outbound destination IP/port of the outbound packet received from the real server as the index;
If not found, dropping the outbound packet; otherwise executing step (20'): according to the corresponding entry in the session table, modifying the outbound source IP/port of the outbound packet to the virtual IP/port, and modifying the outbound destination IP/port of the outbound packet to the client IP/port;
Step (30'): computing the checksum of the outbound packet;
Step (40'): sending the outbound packet with the computed checksum to the client.
Wherein, step (20) further comprises:
Step (201): organizing the IP addresses, ports, and current load of all real servers in a list;
Step (202): selecting the real server IP address and port from the list in turn by a round-robin algorithm.
In addition, step (30) further comprises:
Step (301): selecting a back-end IP by the round-robin algorithm;
Step (302): selecting a back-end port by the round-robin algorithm;
Step (303): searching the session table for the selected back-end IP/port; if found, returning to step (302).
Preferably, after step (50), the method further comprises:
Step (51): adding the client IP/port in the corresponding entry to the TCP header of the datagram as a new TCP option entry.
The checksum comprises the IP header checksum and the TCP header checksum.
The session table further comprises traffic statistics, a spin lock, and flag bits.
Correspondingly, the technical scheme of the load balancer of the present invention comprises:
a session table storing the client IP/port and the virtual IP/port, the session table further comprising a field storing the back-end IP/port; the load balancer further comprises the following units for processing an inbound data flow:
an inbound flow lookup unit, which searches the session table with the inbound source IP/port and the inbound destination IP/port of the inbound packet received from the client as the index;
a real server selection unit, used to select a real server as the destination server;
a back-end IP and back-end port selection unit, used to select a back-end IP and a back-end port;
an entry creation unit, used to create an entry in the session table according to the selected destination server, back-end IP, and back-end port, the entry comprising the virtual IP/port, client IP/port, real server IP/port, and back-end IP/port;
an inbound flow modification unit, used to modify, according to the corresponding entry in the session table, the inbound destination IP/port of the inbound packet to the real server IP/port, and the inbound source IP/port of the inbound packet to the back-end IP/port;
an inbound flow checksum unit, used to compute the checksum of the inbound packet;
an inbound flow sending unit, used to send the inbound packet with the computed checksum to the real server; wherein,
if the inbound flow lookup unit does not find an entry, the real server selection unit is triggered; otherwise the inbound flow modification unit is triggered;
the real server selection unit is connected to the back-end IP and back-end port selection unit, which in turn is connected to the entry creation unit;
the entry creation unit is connected to the inbound flow modification unit, which in turn is connected to the inbound flow checksum unit and then to the inbound flow sending unit.
In addition, the load balancer of the present invention further comprises the following units for processing an outbound data flow:
an outbound flow lookup unit, used to search the session table with the outbound source IP/port and the outbound destination IP/port of the outbound packet received from the real server as the index;
an outbound flow modification unit, used to modify, according to the corresponding entry in the session table, the outbound source IP/port of the outbound packet to the virtual IP/port, and the outbound destination IP/port of the outbound packet to the client IP/port;
an outbound flow checksum unit, used to compute the checksum of the outbound packet;
an outbound flow sending unit, used to send the outbound packet to the client; wherein,
if the outbound flow lookup unit does not find an entry, the packet is dropped; otherwise the outbound flow modification unit is triggered;
the outbound flow modification unit is connected to the outbound flow checksum unit, which in turn is connected to the outbound flow sending unit.
In addition, the load balancer of the present invention further comprises a TCP option adding unit, used to add the client IP/port in the corresponding entry to the TCP header of the datagram as a new TCP option entry.
Compared with the prior art, the beneficial effects of the load balancing method and load balancer of the present invention are:
First, because the present invention performs NAT twice, namely SNAT and DNAT, interconnection across network segments is achieved; there is no need to adopt expensive layer-7 load balancing equipment, to discard existing equipment for large-scale hardware upgrades, or to design a complicated network topology in order to improve network data flow processing capability.
Second, because the present invention adds the client IP/port to the TCP header of the datagram as a new TCP option entry, the client IP address remains available without modifying the application, which facilitates large-scale application migration.
Description of drawings
The following description, taken in conjunction with the accompanying drawings, provides a more thorough understanding of the present disclosure. In the drawings:
Fig. 1 is a flow chart of the processing of an inbound data flow by the prior-art load balancing method;
Fig. 2 is a flow chart of the processing of an outbound data flow by the prior-art load balancing method;
Fig. 3 is a flow chart of the processing of an inbound data flow by the load balancing method of the present invention;
Fig. 4 is a flow chart of the processing of an outbound data flow by the load balancing method of the present invention;
Fig. 5 is a structural schematic diagram of the processing of an inbound data flow by the load balancer of the present invention;
Fig. 6 is a structural schematic diagram of the processing of an outbound data flow by the load balancer of the present invention;
Fig. 7 is a schematic diagram of an example comprising two load balancers.
Embodiment
Specific embodiments of the present invention are described in detail below, but the present invention is not limited to these specific embodiments.
As shown in Figure 3, the load balancing method of the present invention uses a session table to store the client IP/port and the virtual IP/port, and adds to the session table a field storing the back-end IP/port. An example is shown in Table 1 below:
c_ip:c_port | v_ip:v_port | b_ip:b_port | r_ip:r_port |

Table 1
As can be seen from this session table, an entry comprises (v_ip:v_port/c_ip:c_port/r_ip:r_port/b_ip:b_port), wherein v_ip:v_port refers to the virtual IP/port, c_ip:c_port refers to the client IP/port, r_ip:r_port refers to the real server IP/port, and b_ip:b_port refers to the back-end IP/port.
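The entry structure above can be sketched as a small data type; this is an illustrative Python model whose names and dictionary indexing scheme are invented for illustration (the sample addresses are borrowed from the Fig. 7 deployment), not the patent's actual data structure:

```python
from dataclasses import dataclass

@dataclass
class SessionEntry:
    c: tuple  # c_ip:c_port, client IP/port
    v: tuple  # v_ip:v_port, virtual IP/port
    b: tuple  # b_ip:b_port, back-end IP/port
    r: tuple  # r_ip:r_port, real server IP/port

session = {}

def store(entry):
    # one entry is reachable under both indexes the method uses:
    session[(entry.c, entry.v)] = entry   # inbound:  (src, dst) of a client packet
    session[(entry.r, entry.b)] = entry   # outbound: (src, dst) of a server packet

e = SessionEntry(("1.2.3.4", 5000), ("210.77.19.23", 80),
                 ("10.13.65.200", 40001), ("10.13.65.2", 8080))
store(e)
```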
The processing steps for an inbound data flow comprise:
Step 10) searching the session table with the inbound source IP/port and the inbound destination IP/port of the inbound packet received from the client as the index;
If not found, executing step 20): selecting a real server as the destination server; otherwise executing step 50);
Step 30) selecting a back-end IP and a back-end port;
Step 40) creating an entry in the session table according to the selected destination server, back-end IP, and back-end port, the entry comprising the virtual IP/port, client IP/port, real server IP/port, and back-end IP/port;
Step 50) according to the corresponding entry in the session table, modifying the inbound destination IP/port of the inbound packet to the real server IP/port, and modifying the inbound source IP/port to the back-end IP/port;
Step 60) computing the checksum of the inbound packet;
Step 70) sending the inbound packet with the computed checksum to the real server.
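Steps 10) through 70) can be sketched as follows, assuming a simple dictionary session table and placeholder selection policies; checksum computation and packet transmission are elided, and all names are illustrative:

```python
session = {}
real_servers = [("10.13.65.2", 8080), ("10.13.65.3", 8080)]
backend_pool = [("10.13.65.200", p) for p in range(40000, 40010)]

def inbound(src, dst, payload):
    key = (src, dst)                          # step 10): (c_ip:c_port, v_ip:v_port)
    entry = session.get(key)
    if entry is None:
        r = real_servers[0]                   # step 20): select a real server
        used = {e[1] for e in session.values()}
        b = next(bp for bp in backend_pool    # step 30): a free back-end IP/port
                 if bp not in used)
        entry = (r, b)
        session[key] = entry                  # step 40): v/c/r/b entry created
    r, b = entry
    return b, r, payload                      # step 50): SNAT src -> b, DNAT dst -> r
                                              # steps 60)-70): checksum and send omitted

s, d, _ = inbound(("1.2.3.4", 5000), ("210.77.19.23", 80), b"GET /")
```

After the call, the packet's source is a back-end IP/port and its destination is the real server IP/port, so the double NAT is complete.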
As shown in Figure 4, the processing steps for an outbound data flow comprise:
Step 10') searching the session table with the outbound source IP/port and the outbound destination IP/port of the outbound packet sent from the real server as the index;
If not found, dropping the outbound packet; otherwise step 20') according to the corresponding entry in the session table, modifying the outbound source IP/port of the outbound packet to the virtual IP/port, and the outbound destination IP/port of the outbound packet to the client IP/port;
Step 30') computing the checksum of the outbound packet;
Step 40') sending the outbound packet with the computed checksum to the client.
From the above, the load balancing method of the present invention adds a field to the Session table for storing the back-end IP/port. For an inbound data flow, the Session table is searched with the inbound source IP/port and the inbound destination IP/port of the packet received from the client as the index, where the inbound source IP/port is the client IP/port (c_ip:c_port) and the inbound destination IP/port is the virtual IP/port (v_ip:v_port). If a corresponding entry is found in the Session table, the connection already exists, and two NAT (Network Address Translation) translations are performed directly according to that entry. Otherwise, that is, if no corresponding entry exists in the Session table, a real server is selected as the destination server for the inbound packet, a back-end IP and back-end port are then selected, and (client IP/port, virtual IP/port, real IP/port, back-end IP/port) is stored in the Session table as an entry. Then, according to this entry, the inbound source IP/port of the inbound packet is modified to the back-end IP/port, that is, one SNAT (Source Network Address Translation) is performed, and the inbound destination IP/port of the inbound packet is modified to the real IP/port, that is, one DNAT (Destination Network Address Translation) is performed; NAT is thus carried out twice. At this point the source IP/port of the inbound packet is the back-end IP/port and its destination IP/port is the real IP/port. To the back-end real server, the packet appears to come from the back-end IP/port; the server cannot see the client, but the client IP/port is not lost, it is present in the Session table. The present invention can also add this IP/port information to a TCP option, so that the application on the back-end real server can see the client information.
For an outbound data flow, the session table is first searched with the outbound source IP/port and the outbound destination IP/port of the outbound packet sent from the real server as the index, where the outbound source IP/port of that packet is the real IP/port and the outbound destination IP/port is the back-end IP/port. If no entry is found, the outbound packet is dropped (which improves security to a certain extent); otherwise, according to the corresponding entry in the session table, the outbound source IP/port of the outbound packet is modified to the virtual IP/port and the outbound destination IP/port to the client IP/port (for example, the 4 bytes of the source IP address in the IP header of the outbound packet are replaced with v_ip, the 2 bytes of the source port in the TCP header are replaced with v_port, the 4 bytes of the destination IP address in the IP header are replaced with c_ip, and the 2 bytes of the destination port are replaced with c_port). SNAT+DNAT is thus performed again, so that to the client the packet appears to be sent from the virtual IP/port.
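The byte-level rewrite described in the parenthetical above can be sketched with Python's struct and socket modules; a plain 20-byte IPv4 header followed by a TCP header is assumed, and checksum recomputation is left out. This is a sketch of the technique, not the patent's implementation:

```python
import socket
import struct

def rewrite_outbound(pkt: bytearray, v_ip, v_port, c_ip, c_port):
    """Rewrite src -> v_ip:v_port and dst -> c_ip:c_port in place."""
    pkt[12:16] = socket.inet_aton(v_ip)            # IP header: 4 source-address bytes
    pkt[16:20] = socket.inet_aton(c_ip)            # IP header: 4 destination bytes
    ihl = (pkt[0] & 0x0F) * 4                      # TCP header starts after the IP header
    pkt[ihl:ihl + 2] = struct.pack("!H", v_port)   # TCP header: 2 source-port bytes
    pkt[ihl + 2:ihl + 4] = struct.pack("!H", c_port)
    return pkt

pkt = bytearray(40)
pkt[0] = 0x45                                      # IPv4, IHL = 5 words (20 bytes)
rewrite_outbound(pkt, "210.77.19.23", 80, "1.2.3.4", 5000)
```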
Further, step 20) comprises:
Step 201) organizing the IP addresses, ports, and current load of all real servers in a list;
Step 202) selecting the real server IP address and port from the list in turn by a round-robin algorithm.
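Steps 201) and 202) amount to round-robin selection over a server list; a minimal sketch follows, in which the list contents and the load field are illustrative assumptions:

```python
import itertools

real_servers = [                     # step 201): (ip, port, current_load)
    ("10.13.65.2", 8080, 0),
    ("10.13.65.3", 8080, 0),
    ("10.13.65.4", 8080, 0),
]
_rr = itertools.cycle(range(len(real_servers)))

def select_real_server():
    ip, port, _load = real_servers[next(_rr)]   # step 202): pick servers in turn
    return ip, port

picks = [select_real_server() for _ in range(4)]   # wraps around after the last server
```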
In addition, step 30) further comprises:
Step 301) selecting a back-end IP by the round-robin algorithm;
Step 302) selecting a back-end port by the round-robin algorithm;
Step 303) searching the session table for the selected back-end IP/port; if found, returning to step 302).
These steps avoid back-end port collisions: they check whether the selected back-end port is already occupied, and if it is, a new one is selected.
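Steps 301) through 303) can be sketched as round-robin selection over a back-end address pool with a re-selection loop on collision; the pool sizes and the occupancy set below are illustrative assumptions:

```python
import itertools

backend_ips = ["10.13.65.200", "10.13.65.201"]
backend_ports = range(40000, 40004)
occupied = {("10.13.65.200", 40000)}      # pairs already present in the session table

_ip_rr = itertools.cycle(backend_ips)
_port_rr = itertools.cycle(backend_ports)

def select_backend():
    ip = next(_ip_rr)                      # step 301): next back-end IP in turn
    for _ in backend_ports:
        port = next(_port_rr)              # step 302): next back-end port in turn
        if (ip, port) not in occupied:     # step 303): re-select on collision
            occupied.add((ip, port))
            return ip, port
    raise RuntimeError("back-end port pool exhausted for " + ip)

b = select_backend()   # skips the occupied (10.13.65.200, 40000) pair
```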
In addition, as further shown in Figure 3, after step 50) the method also comprises:
Step 51) adding the client IP/port in the corresponding entry to the TCP header of the datagram as a new TCP option entry.
The checksum can comprise the IP header checksum and the TCP header checksum; for the computation methods, see standards RFC 791 and RFC 793.
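The checksum defined in RFC 791/793 is the 16-bit ones'-complement of the ones'-complement sum of the data taken as 16-bit words; a straightforward sketch:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 791/793-style checksum over a header (pseudo-header handling omitted)."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # big-endian 16-bit words
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF                         # ones'-complement of the sum

csum = internet_checksum(b"\x00\x01\x00\x02")      # 0xFFFC
```

A useful property for verification: recomputing the checksum over the data with its own checksum appended yields zero.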
In addition, the session table can further comprise traffic statistics, a spin lock, flag bits, and the like, wherein the traffic statistics are used for client behavior analysis and access control, and the spin lock and flag bits are used for maintenance of the session table.
In addition, the client IP address and port in the TCP option can be parsed out of the packet at the real server end.
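Carrying the client IP/port in a TCP option and parsing it back at the real server end can be sketched as follows. The option kind value (254, from the experimental range) and the kind/length/port/address layout are assumptions for illustration, since the patent text does not specify them:

```python
import socket
import struct

OPT_KIND = 254   # assumed experimental option kind, not specified by the patent
OPT_LEN = 8      # kind(1) + length(1) + port(2) + IPv4 address(4)

def encode_client_option(c_ip: str, c_port: int) -> bytes:
    return struct.pack("!BBH", OPT_KIND, OPT_LEN, c_port) + socket.inet_aton(c_ip)

def parse_client_option(options: bytes):
    """Walk a TCP options block and return (client_ip, client_port) if present."""
    i = 0
    while i < len(options):
        kind = options[i]
        if kind == 0:                       # end-of-option-list
            break
        if kind == 1:                       # NOP has no length byte
            i += 1
            continue
        length = options[i + 1]
        if kind == OPT_KIND and length == OPT_LEN:
            port = struct.unpack("!H", options[i + 2:i + 4])[0]
            ip = socket.inet_ntoa(options[i + 4:i + 8])
            return ip, port
        i += length
    return None

opts = b"\x01" + encode_client_option("1.2.3.4", 5000)   # NOP padding + our option
```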
Correspondingly, the invention also discloses a load balancer, comprising a session table storing the client IP/port and the virtual IP/port, the session table further comprising a field storing the back-end IP/port. The load balancer further comprises the following units for processing an inbound data flow, as shown in Figure 5:
an inbound flow lookup unit 1, which searches the session table with the inbound source IP/port and the inbound destination IP/port of the inbound packet received from the client as the index;
a real server selection unit 2, used to select a real server as the destination server;
a back-end IP and back-end port selection unit 3, used to select a back-end IP and a back-end port;
an entry creation unit 4, used to create an entry (v_ip:v_port/c_ip:c_port/r_ip:r_port/b_ip:b_port) in the session table according to the selected destination server, back-end IP, and back-end port, wherein v_ip:v_port refers to the virtual IP/port, c_ip:c_port refers to the client IP/port, r_ip:r_port refers to the real server IP/port, and b_ip:b_port refers to the back-end IP/port;
an inbound flow modification unit 5, used to modify, according to the corresponding entry in the session table, the inbound destination IP/port of the inbound packet to the real server IP/port, and the inbound source IP/port of the inbound packet to the back-end IP/port;
an inbound flow checksum unit 6, used to compute the checksum of the inbound packet;
an inbound flow sending unit 7, used to send the inbound packet with the computed checksum to the real server; wherein,
if the inbound flow lookup unit 1 does not find an entry, the real server selection unit 2 is triggered; otherwise the inbound flow modification unit 5 is triggered;
the real server selection unit 2 is connected to the back-end IP and back-end port selection unit 3, which in turn is connected to the entry creation unit 4;
the entry creation unit 4 is connected to the inbound flow modification unit 5, which in turn is connected to the inbound flow checksum unit 6 and then to the inbound flow sending unit 7.
From the above, the inbound flow lookup unit 1 of the load balancer of the present invention searches the session table with the inbound source IP/port and the inbound destination IP/port of the inbound packet received from the client as the index, where the inbound source IP/port is the client IP/port and the inbound destination IP/port is the virtual IP/port. If an entry is found, the inbound flow modification unit 5 is triggered to perform two NAT translations according to the entry found in the session table. If no entry is found, the real server selection unit 2 is triggered and selects a real server as the destination server; the back-end IP and back-end port selection unit 3 then selects a back-end IP and back-end port, and once the back-end IP/port is determined, the entry creation unit 4 creates an entry (v_ip:v_port/c_ip:c_port/r_ip:r_port/b_ip:b_port) in the session table. The inbound flow modification unit 5 modifies the inbound destination IP/port of the inbound packet (at this moment the virtual IP/port) to the real server IP/port, and the inbound source IP/port (at this moment the client IP/port) to the back-end IP/port, thus performing two NAT translations (SNAT+DNAT); to the real server, the inbound packet then appears to come from the back-end IP/port. The inbound flow checksum unit 6 then computes the checksum, after which the inbound flow sending unit 7 sends the inbound packet to the real server.
As shown in Figure 6, load equalizer of the present invention also comprises the go out unit of data flow of following processing:
The data flow of going out is searched unit 8, and go out datastream source IP/ port and the data flow purpose IP/ port of going out that are used for the packet of going out that receives from described real server are the described conversational list of index search;
The data flow of going out is revised unit 9, is used for according to the corresponding clauses and subclauses of described conversational list, and be the source IP/ port modifications of the described packet of going out that empty IP/ port is the data flow purpose IP/ port modifications of going out of the described packet of going out client ip/port also;
The data flow of going out verification unit 10, be used to calculate this packet of going out verification and;
The data flow of going out transmitting element 11, be used for calculated verification and the packet of going out send to client; Wherein,
If the described data flow of going out is searched the result of unit 8 for not, then abandon the described packet of going out, revise unit 9 otherwise trigger the described data flow of going out;
The described data flow of going out is revised unit 9 and is connected the described data flow verification unit 10 of going out, itself so that connect the described data flow transmitting element 11 of going out.
From as can be known above-mentioned, for the data flow of going out, searching unit 8 by the described data flow of going out is the described conversational list of index search with the datastream source IP/ port of going out (be r_ip:r_port this moment) and the data flow purpose IP/ port (b_ip:b_port) of going out of the packet of going out that receives from described real server; If do not find then abandon this packet of going out, otherwise the described data flow of going out is revised unit 9 according to the respective entries in the conversational list that is found, is the datastream source IP/ port modifications of going out of the described packet of going out that empty IP/ port is the data flow purpose IP/ port modifications of going out of the described packet of going out client ip/port also, promptly carries out twice NAT (SNAT+DNAT) conversion; This moment, the packet that returns sended over from its desired destination end (empty IP/ port) for client.Then, the data flow of going out verification unit 10 calculate these packets of going out verification and, by the data flow transmitting element 11 of going out the described packet of going out is sent to client at last.
As further shown in Figure 6, the load balancer of the present invention also comprises a TCP option adding unit 12, used to add the client IP/port from the corresponding entry into the TCP header of the datagram as a new TCP option entry.
Correspondingly, a TCP option parsing unit can be installed on the real server side; this TCP option parsing unit parses the client IP address and port out of the TCP option in the packet and hands them to the application program on the real server.
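The option carrying the client address can be pictured as an ordinary kind/length/value TCP option. A minimal sketch in Python (the patent contains no code, and it does not specify an option kind number, so `OPT_KIND_CLIENT_ADDR = 200` below is an illustrative assumption):

```python
import struct

# Hypothetical option kind; the patent does not assign a number.
OPT_KIND_CLIENT_ADDR = 200

def encode_client_option(client_ip: str, client_port: int) -> bytes:
    """Pack the client IP/port into a TCP option: kind, length, IPv4, port."""
    ip_bytes = bytes(int(octet) for octet in client_ip.split("."))
    return struct.pack("!BB4sH", OPT_KIND_CLIENT_ADDR, 8, ip_bytes, client_port)

def decode_client_option(option: bytes):
    """Inverse operation, as the parsing unit on the real server would do."""
    kind, length, ip_bytes, port = struct.unpack("!BB4sH", option)
    assert kind == OPT_KIND_CLIENT_ADDR and length == 8
    return ".".join(str(b) for b in ip_bytes), port
```

Because the option travels inside the TCP header, the real server's application can recover the original client address even though both IP-level addresses were rewritten by the two NAT translations.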
The technical scheme of the present invention is described below using a deployment with two load balancers as an example.
As shown in Figure 7, the deployment comprises two load balancers (LB_A and LB_B) that maintain the virtual address VIP 210.77.19.23 through a heartbeat (the VRRP protocol). The back-end network cards of the two LBs are connected to different internal network segments, 10.13.65.x/24 and 10.13.66.x/24, with configured interface IPs 10.13.65.1 and 10.13.66.1, and the two LBs are configured with back-end address pools 10.13.65.128~10.13.65.254 and 10.13.66.128~10.13.66.254. The back-end real servers (RS), also called Web servers, are each equipped with two network cards for redundancy; the interface IPs of the two cards (which are also the service IPs) lie in the two segments 10.13.65.x/24 and 10.13.66.x/24 respectively. The interface IP addresses of the four real servers shown in the figure are 10.13.65.2~10.13.65.5 and 10.13.66.2~10.13.66.5 (four servers, each with two network cards for redundancy). In this example LB_A is assumed to be the working node (the VIP is on LB_A), its service port is 80, and the service port of the back-end real servers is 8080.
In the figure, the switches are network devices providing layer-2 and layer-3 interconnection; their role in the topology is to connect the servers and the LBs so that they can communicate with each other.
The session table of this load balancer is as shown in Table 2:
c_ip:c_port | v_ip:v_port | b_ip:b_port | r_ip:r_port | Traffic statistics | Other |
66.249.89.105:236 | 210.77.19.23:80 | 10.13.65.128:2000 | 10.13.65.2:8080 | _ | _ |
Table 2
The session table held by this load balancer contains at least four items: client IP/port, virtual IP/port, back-end IP/port, and real server IP/port. In addition it contains traffic statistics and other fields. Only one entry is shown in Table 2; the table can, of course, hold multiple entries.
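The four-tuple entry described above can be sketched as follows. This is a minimal illustration, assuming Python dicts; the patent only specifies which fields an entry holds, not the data structure. Indexing each entry under both key pairs reflects the two lookups performed later: incoming packets are matched on (client source, virtual destination) and outgoing packets on (real-server source, back-end destination).

```python
def make_entry(c_addr, v_addr, b_addr, r_addr):
    """One session-table entry: client, virtual, back-end, real server IP/port."""
    return {"c_ip:c_port": c_addr, "v_ip:v_port": v_addr,
            "b_ip:b_port": b_addr, "r_ip:r_port": r_addr,
            "traffic": 0}  # traffic statistics field from Table 2

session_table = {}

def insert_entry(table, entry):
    """Index the entry under both lookup keys: the incoming key
    (client source, virtual destination) and the outgoing key
    (real-server source, back-end destination)."""
    table[(entry["c_ip:c_port"], entry["v_ip:v_port"])] = entry
    table[(entry["r_ip:r_port"], entry["b_ip:b_port"])] = entry
```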
Incoming traffic is processed as follows:
First, a packet sent from the client is received; its source IP/port is 66.249.89.105:236 and its destination IP/port is 210.77.19.23:80. If this is an initial connection, the packet is a connection request carrying the SYN (Synchronize) flag; an ordinary data packet carries the ACK (Acknowledge) flag; a connection close request carries the FIN (Finish) flag. In this example an initial connection is assumed, i.e., the packet carries the SYN flag.
Assume the session table is empty at this point; it is searched with 210.77.19.23:80 and 66.249.89.105:236 as the index. Since no entry is found in this example, a real server is selected as the destination server using the RR (Round Robin) algorithm — other selection strategies could of course be used; assume real server RS1 at 10.13.65.2:8080 is selected. The RR algorithm can then also be used to select a back-end IP from the back-end IP pool; assume the selected back-end IP address is 10.13.65.128. After the back-end IP has been selected, a back-end port is selected, again using the RR algorithm in sequence so as to minimize the probability of a port collision (the port already being occupied); assume port 2000 is selected. The session table is then searched: if an entry with 10.13.65.128:2000 and 10.13.65.2:8080 is found, the back-end port is occupied and a different back-end port must be re-selected. In this example assume no such entry is found, i.e., port 2000 is available.
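The selection step above can be sketched as a small round-robin allocator. This is an illustrative sketch in Python, not the patent's implementation: the address pools are taken from the worked example, while the port range and starting offsets are assumptions. The `while` loop corresponds to re-selecting the back-end port when the (real server, back-end) pair is already in the session table.

```python
from itertools import count

# Pools from the worked example; the port range is an illustrative assumption.
real_servers = [("10.13.65.2", 8080), ("10.13.65.3", 8080),
                ("10.13.65.4", 8080), ("10.13.65.5", 8080)]
backend_ips = ["10.13.65.%d" % i for i in range(128, 255)]

_rs_rr, _ip_rr, _port_rr = count(), count(), count()

def select_backend(session_table):
    """Round-robin over real servers, back-end IPs and back-end ports;
    re-select the port while (real server, back-end) is already in use."""
    rs = real_servers[next(_rs_rr) % len(real_servers)]
    b_ip = backend_ips[next(_ip_rr) % len(backend_ips)]
    while True:
        b_port = 1024 + next(_port_rr) % (65536 - 1024)
        if (rs, (b_ip, b_port)) not in session_table:  # collision check
            return rs, (b_ip, b_port)
```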
Next, the entry (210.77.19.23:80, 66.249.89.105:236, 10.13.65.2:8080, 10.13.65.128:2000) is inserted into the session table, as shown in Table 2. According to this entry, the destination IP/port of the packet is then modified to the service IP and service port of RS1, and the source IP/port is modified to the back-end IP/port. In this example the packet's destination IP/port 210.77.19.23:80 is rewritten to 10.13.65.2:8080, and its source IP/port 66.249.89.105:236 is rewritten to 10.13.65.128:2000.
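The two inbound rewrites (DNAT on the destination, SNAT on the source) can be sketched as follows, using the addresses from the worked example. Modeling a packet as a plain dict is an illustrative assumption; in a real balancer the rewrite happens on the IP/TCP headers.

```python
# Session entry for the worked example: client, virtual, back-end, real server.
entry = {"c": ("66.249.89.105", 236), "v": ("210.77.19.23", 80),
         "b": ("10.13.65.128", 2000), "r": ("10.13.65.2", 8080)}

def rewrite_inbound(pkt, entry):
    """DNAT: virtual IP/port -> real server IP/port;
       SNAT: client IP/port -> back-end IP/port."""
    return {"src": entry["b"], "dst": entry["r"], "payload": pkt["payload"]}

pkt_in = {"src": ("66.249.89.105", 236), "dst": ("210.77.19.23", 80),
          "payload": b"GET / HTTP/1.1"}
pkt_fwd = rewrite_inbound(pkt_in, entry)
```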
The checksums are then calculated, including the IP header checksum and the TCP header checksum.
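Because both rewrites change addresses in the IP and TCP headers, both checksums must be recomputed. The IP header checksum is the standard RFC 791 ones'-complement sum, sketched below (the TCP checksum additionally covers a pseudo-header with the source and destination IPs, which is why the SNAT/DNAT also invalidates it):

```python
def ip_checksum(header: bytes) -> int:
    """RFC 791 header checksum: ones'-complement sum of 16-bit words,
    computed with the checksum field itself set to zero."""
    if len(header) % 2:           # pad odd-length input with a zero byte
        header += b"\x00"
    total = sum(int.from_bytes(header[i:i + 2], "big")
                for i in range(0, len(header), 2))
    while total >> 16:            # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF
```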
Finally the packet is sent to the selected real server RS1. Having received a connection request packet carrying the SYN flag, real server RS1 will then return a reply packet carrying the SYN/ACK flag.
For outgoing traffic, the load balancer of the present invention proceeds as follows:
Assume real server RS1 has sent a reply packet whose source IP/port is 10.13.65.2:8080 and whose destination IP/port is 10.13.65.128:2000. The session table is searched with 10.13.65.2:8080 and 10.13.65.128:2000 as the index; if no entry is found the packet is discarded. In this example the corresponding entry, shown in Table 2, is found. Then, according to the found entry, the packet's source IP/port is modified to the virtual IP/port of the virtual server and its destination IP/port is modified to the client IP/port: in this example the source IP/port is rewritten to 210.77.19.23:80 and the destination IP/port to 66.249.89.105:236. The checksums, including the IP header checksum and the TCP header checksum, are then calculated, and finally the packet is sent to the client.
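The outgoing path described above — lookup by (source, destination), drop on a miss, otherwise SNAT + DNAT back toward the client — can be sketched as follows. The dict modeling is again an illustrative assumption.

```python
# Session table keyed by the outgoing lookup index (real-server src, back-end dst).
session_table = {
    (("10.13.65.2", 8080), ("10.13.65.128", 2000)):
        {"c": ("66.249.89.105", 236), "v": ("210.77.19.23", 80)},
}

def handle_outbound(pkt, table):
    """Look up the session entry; drop on a miss, otherwise rewrite."""
    entry = table.get((pkt["src"], pkt["dst"]))
    if entry is None:
        return None  # no session entry: discard the packet
    # SNAT: real server -> virtual; DNAT: back-end -> client
    return {"src": entry["v"], "dst": entry["c"]}

reply = handle_outbound({"src": ("10.13.65.2", 8080),
                         "dst": ("10.13.65.128", 2000)}, session_table)
```

To the client, `reply` appears to come from the virtual IP/port it originally contacted, which is what makes the two translations transparent.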
In summary, by employing two NAT translations, namely SNAT and DNAT, the present invention realizes layer-4 load balancing, so that the network traffic handling capacity can be improved without discarding existing equipment and undertaking large-scale hardware upgrades.
Secondly, because the present invention adds the client IP/port to the datagram's TCP options as a new TCP option entry, the option can be parsed by the TCP/IP protocol stack on the back-end RS, which avoids the complexity of modifying the application program and allows application-layer protocols other than HTTP to be supported.
In addition, because the present invention uses a TCP option to carry the client IP/port, the application program running on the real server can still obtain the client's connection information after the two NAT translations, without modification.
Although specific embodiments of the present invention have been described above with reference to the accompanying drawings, those skilled in the art can make various changes, modifications and equivalent substitutions to the present invention without departing from its spirit and scope. All such changes, modifications and equivalent substitutions are intended to fall within the spirit and scope defined by the appended claims.
Claims (10)
1. A load balancing method using a session table to store a client IP/port and a virtual IP/port, characterized in that an item storing a back-end IP/port is added to said session table, and in that the processing of incoming traffic comprises the steps of:
Step (10), in which said session table is searched using, as an index, the incoming source IP/port and incoming destination IP/port of an incoming packet received from a client;
If no entry is found, step (20) is executed, in which a real server is selected as a destination server; otherwise step (50) is executed;
After step (20), step (30) is executed, in which a back-end IP and a back-end port are selected;
Step (40), in which an entry is created in the session table from the selected destination server, back-end IP and back-end port, said entry comprising the virtual IP/port, the client IP/port, the real server IP/port and the back-end IP/port;
Step (50), in which, according to the corresponding entry in the session table, the incoming destination IP/port of said incoming packet is modified to the real server IP/port and the incoming source IP/port of said incoming packet is modified to the back-end IP/port;
Step (60), in which the checksum of said incoming packet is calculated;
Step (70), in which the checksummed incoming packet is sent to said real server.
2. The load balancing method of claim 1, characterized in that the processing of outgoing traffic comprises the steps of:
Step (10'), in which said session table is searched using, as an index, the outgoing source IP/port and outgoing destination IP/port of an outgoing packet received from said real server;
If no entry is found, said outgoing packet is discarded; otherwise step (20') is executed, in which, according to the corresponding entry in said session table, the outgoing source IP/port of said outgoing packet is modified to the virtual IP/port and the outgoing destination IP/port of said outgoing packet is modified to the client IP/port;
Step (30'), in which the checksum of said outgoing packet is calculated;
Step (40'), in which the checksummed outgoing packet is sent to said client.
3. The load balancing method of claim 2, characterized in that said step (20) further comprises:
Step (201), in which the IP addresses, ports and current loads of all real servers are organized in a list;
Step (202), in which a polling algorithm is used to select said real server IP address and port from said list in turn.
4. The load balancing method of claim 3, characterized in that said step (30) further comprises:
Step (301), in which a polling algorithm is used to select a back-end IP;
Step (302), in which a polling algorithm is used to select a back-end port;
Step (303), in which the selected back-end IP/port is searched for in said session table; if it is found, the method returns to step (302).
5. The load balancing method of any one of claims 1 to 3, characterized in that, after said step (50), the method further comprises:
Step (51), in which the client IP/port from said corresponding entry is added to the TCP header of the datagram as a new TCP option entry.
6. The load balancing method of claim 5, characterized in that said checksum comprises the IP header checksum and the TCP header checksum.
7. The load balancing method of claim 6, characterized in that said session table further comprises traffic statistics, a spin lock and flag bits.
8. A load balancer comprising a session table storing a client IP/port and a virtual IP/port, characterized in that said session table also comprises an item storing a back-end IP/port, and in that said load balancer further comprises the following units for processing incoming traffic:
An incoming-flow lookup unit, which searches said session table using, as an index, the incoming source IP/port and incoming destination IP/port of an incoming packet received from a client;
A real server selection unit, used to select a real server as a destination server;
A back-end IP and back-end port selection unit, used to select a back-end IP and a back-end port;
An entry creation unit, used to create an entry in the session table from the selected destination server, back-end IP and back-end port, said entry comprising the virtual IP/port, the client IP/port, the real server IP/port and the back-end IP/port;
An incoming-flow modification unit, used to modify, according to the corresponding entry in the session table, the incoming destination IP/port of said incoming packet to the real server IP/port and the incoming source IP/port of said incoming packet to the back-end IP/port;
An incoming-flow checksum unit, used to calculate the checksum of said incoming packet;
An incoming-flow sending unit, used to send the checksummed incoming packet to said real server; wherein,
If said incoming-flow lookup unit finds no entry, said real server selection unit is triggered; otherwise said incoming-flow modification unit is triggered;
Said real server selection unit is connected to said back-end IP and back-end port selection unit, which in turn is connected to said entry creation unit;
Said entry creation unit is connected to said incoming-flow modification unit, which in turn is connected to said incoming-flow checksum unit, which is connected to said incoming-flow sending unit.
9. The load balancer of claim 8, characterized by further comprising the following units for processing outgoing traffic:
An outgoing-flow lookup unit, used to search said session table using, as an index, the outgoing source IP/port and outgoing destination IP/port of an outgoing packet received from said real server;
An outgoing-flow modification unit, used to modify, according to the corresponding entry in said session table, the outgoing source IP/port of said outgoing packet to the virtual IP/port and the outgoing destination IP/port of said outgoing packet to the client IP/port;
An outgoing-flow checksum unit, used to calculate the checksum of said outgoing packet;
An outgoing-flow sending unit, used to send the checksummed outgoing packet to the client; wherein,
If said outgoing-flow lookup unit finds no entry, said packet is discarded; otherwise said outgoing-flow modification unit is triggered;
Said outgoing-flow modification unit is connected to said outgoing-flow checksum unit, which in turn is connected to said outgoing-flow sending unit.
10. The load balancer of claim 9, characterized by further comprising a TCP option adding unit, used to add the client IP/port from said corresponding entry to the TCP header of the datagram as a new TCP option entry.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201010184118.6A CN102255932B (en) | 2010-05-20 | 2010-05-20 | Load-balancing method and load equalizer |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201010184118.6A CN102255932B (en) | 2010-05-20 | 2010-05-20 | Load-balancing method and load equalizer |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102255932A true CN102255932A (en) | 2011-11-23 |
CN102255932B CN102255932B (en) | 2015-09-09 |
Family
ID=44982926
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201010184118.6A Active CN102255932B (en) | 2010-05-20 | 2010-05-20 | Load-balancing method and load equalizer |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102255932B (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103023942A (en) * | 2011-09-27 | 2013-04-03 | 奇智软件(北京)有限公司 | Load balancing method, device and system of server |
CN103297552A (en) * | 2012-03-02 | 2013-09-11 | 百度在线网络技术(北京)有限公司 | Method and device for transmitting IPv4 address and port of client-side to back-end server |
CN103297407A (en) * | 2012-03-02 | 2013-09-11 | 百度在线网络技术(北京)有限公司 | Method and device for transmitting IPv6 address and port of client-side to back-end server |
CN103368841A (en) * | 2012-03-29 | 2013-10-23 | 深圳市腾讯计算机系统有限公司 | Message forwarding method and device thereof |
CN103491016A (en) * | 2012-06-08 | 2014-01-01 | 百度在线网络技术(北京)有限公司 | Method, system and device for transferring source address in UDP load balancing system |
CN103491065A (en) * | 2012-06-14 | 2014-01-01 | 中兴通讯股份有限公司 | Transparent proxy and transparent proxy realization method |
CN103491053A (en) * | 2012-06-08 | 2014-01-01 | 北京百度网讯科技有限公司 | UDP load balancing method, UDP load balancing system and UDP load balancing device |
CN107786669A (en) * | 2017-11-10 | 2018-03-09 | 华为技术有限公司 | A kind of method of load balance process, server, device and storage medium |
CN108156040A (en) * | 2018-01-30 | 2018-06-12 | 北京交通大学 | A kind of central control node in distribution cloud storage system |
CN108769291A (en) * | 2018-06-22 | 2018-11-06 | 北京云枢网络科技有限公司 | A kind of message processing method, device and electronic equipment |
CN109729104A (en) * | 2019-03-19 | 2019-05-07 | 北京百度网讯科技有限公司 | Client source address acquiring method, device, server and computer-readable medium |
CN110166570A (en) * | 2019-06-04 | 2019-08-23 | 杭州迪普科技股份有限公司 | Service conversation management method, device, electronic equipment |
CN113923202A (en) * | 2021-10-18 | 2022-01-11 | 成都安恒信息技术有限公司 | Load balancing method based on HTTP cluster server |
CN115118638A (en) * | 2022-06-29 | 2022-09-27 | 济南浪潮数据技术有限公司 | Method, device and medium for monitoring back-end network card |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040268358A1 (en) * | 2003-06-30 | 2004-12-30 | Microsoft Corporation | Network load balancing with host status information |
CN101018206A (en) * | 2007-02-14 | 2007-08-15 | 华为技术有限公司 | Packet message processing method and device |
CN101136929A (en) * | 2007-10-19 | 2008-03-05 | 杭州华三通信技术有限公司 | Internet small computer system interface data transmission method and apparatus |
CN101136851A (en) * | 2007-09-29 | 2008-03-05 | 华为技术有限公司 | Stream forwarding method and equipment |
2010
- 2010-05-20 CN CN201010184118.6A patent/CN102255932B/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040268358A1 (en) * | 2003-06-30 | 2004-12-30 | Microsoft Corporation | Network load balancing with host status information |
CN101018206A (en) * | 2007-02-14 | 2007-08-15 | 华为技术有限公司 | Packet message processing method and device |
CN101136851A (en) * | 2007-09-29 | 2008-03-05 | 华为技术有限公司 | Stream forwarding method and equipment |
CN101136929A (en) * | 2007-10-19 | 2008-03-05 | 杭州华三通信技术有限公司 | Internet small computer system interface data transmission method and apparatus |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103023942A (en) * | 2011-09-27 | 2013-04-03 | 奇智软件(北京)有限公司 | Load balancing method, device and system of server |
CN103023942B (en) * | 2011-09-27 | 2016-08-03 | 北京奇虎科技有限公司 | A kind of server load balancing method, Apparatus and system |
CN103297552A (en) * | 2012-03-02 | 2013-09-11 | 百度在线网络技术(北京)有限公司 | Method and device for transmitting IPv4 address and port of client-side to back-end server |
CN103297407A (en) * | 2012-03-02 | 2013-09-11 | 百度在线网络技术(北京)有限公司 | Method and device for transmitting IPv6 address and port of client-side to back-end server |
CN103297552B (en) * | 2012-03-02 | 2016-05-25 | 百度在线网络技术(北京)有限公司 | Transmit client ip v4 address and port method and the device to back-end server |
CN103297407B (en) * | 2012-03-02 | 2016-05-25 | 百度在线网络技术(北京)有限公司 | Transmit client ip v6 address and port method and the device to back-end server |
CN103368841B (en) * | 2012-03-29 | 2016-08-17 | 深圳市腾讯计算机系统有限公司 | Message forwarding method and device |
CN103368841A (en) * | 2012-03-29 | 2013-10-23 | 深圳市腾讯计算机系统有限公司 | Message forwarding method and device thereof |
CN103491016A (en) * | 2012-06-08 | 2014-01-01 | 百度在线网络技术(北京)有限公司 | Method, system and device for transferring source address in UDP load balancing system |
CN103491053A (en) * | 2012-06-08 | 2014-01-01 | 北京百度网讯科技有限公司 | UDP load balancing method, UDP load balancing system and UDP load balancing device |
CN103491065A (en) * | 2012-06-14 | 2014-01-01 | 中兴通讯股份有限公司 | Transparent proxy and transparent proxy realization method |
CN103491065B (en) * | 2012-06-14 | 2018-08-14 | 南京中兴软件有限责任公司 | A kind of Transparent Proxy and its implementation |
CN107786669A (en) * | 2017-11-10 | 2018-03-09 | 华为技术有限公司 | A kind of method of load balance process, server, device and storage medium |
CN107786669B (en) * | 2017-11-10 | 2021-06-22 | 华为技术有限公司 | Load balancing processing method, server, device and storage medium |
CN108156040A (en) * | 2018-01-30 | 2018-06-12 | 北京交通大学 | A kind of central control node in distribution cloud storage system |
CN108769291A (en) * | 2018-06-22 | 2018-11-06 | 北京云枢网络科技有限公司 | A kind of message processing method, device and electronic equipment |
CN109729104A (en) * | 2019-03-19 | 2019-05-07 | 北京百度网讯科技有限公司 | Client source address acquiring method, device, server and computer-readable medium |
CN110166570A (en) * | 2019-06-04 | 2019-08-23 | 杭州迪普科技股份有限公司 | Service conversation management method, device, electronic equipment |
CN110166570B (en) * | 2019-06-04 | 2022-06-28 | 杭州迪普科技股份有限公司 | Service session management method and device, and electronic device |
CN113923202A (en) * | 2021-10-18 | 2022-01-11 | 成都安恒信息技术有限公司 | Load balancing method based on HTTP cluster server |
CN113923202B (en) * | 2021-10-18 | 2023-10-13 | 成都安恒信息技术有限公司 | Load balancing method based on HTTP cluster server |
CN115118638A (en) * | 2022-06-29 | 2022-09-27 | 济南浪潮数据技术有限公司 | Method, device and medium for monitoring back-end network card |
Also Published As
Publication number | Publication date |
---|---|
CN102255932B (en) | 2015-09-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102255932A (en) | Load balancing method and load equalizer | |
JP5214007B2 (en) | Addressing method, addressing device, fabric manager, switch, and data routing method | |
Chen et al. | Survey on routing in data centers: insights and future directions | |
CN102075445B (en) | Load balancing method and device | |
CN101601232B (en) | Triple-tier anycast addressing | |
KR101669700B1 (en) | Agile data center network architecture | |
US7290059B2 (en) | Apparatus and method for scalable server load balancing | |
US20140177639A1 (en) | Routing controlled by subnet managers | |
US9325526B2 (en) | Mechanism for enabling layer two host addresses to be shielded from the switches in a network | |
CN104618243B (en) | Method for routing, apparatus and system, Scheduling of Gateway method and device | |
CN104038425B (en) | The method and apparatus for forwarding ether network packet | |
CN104734955A (en) | Network function virtualization implementation method, wide-band network gateway and control device | |
CN101827039B (en) | Method and equipment for load sharing | |
WO2012149867A1 (en) | Data center network system | |
WO2012149857A1 (en) | Routing method for data center network system | |
CN100531215C (en) | Method for realizing multiple network device link aggregation | |
CN100414936C (en) | Method for balancing load between multi network cards of network file system server | |
CN103297354B (en) | Server interlinkage system, server and data forwarding method | |
CN1455347A (en) | Distributed parallel scheduling wide band network server system | |
CN106716870A (en) | Local packet switching at a satellite device | |
JP5437290B2 (en) | Service distribution method, service distribution device, and program | |
CN101420371B (en) | Dynamic function supporting method and system for ASIC fusion network device | |
CN113329048B (en) | Cloud load balancing method and device based on switch and storage medium | |
Dan et al. | SOPA: source routing based packet-level multi-path routing in data center networks | |
Liang et al. | Dynamic flow scheduling technique for load balancing in fat-tree data center networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |