CN106302225B - Method and apparatus for server load balancing - Google Patents

Method and apparatus for server load balancing

Info

Publication number
CN106302225B
CN106302225B (application CN201610906359.4A)
Authority
CN
China
Prior art keywords
server
address
port
host
client
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610906359.4A
Other languages
Chinese (zh)
Other versions
CN106302225A (en)
Inventor
徐亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Excellent Polytron Technologies Inc
Original Assignee
Excellent Polytron Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Excellent Polytron Technologies Inc filed Critical Excellent Polytron Technologies Inc
Priority to CN201610906359.4A priority Critical patent/CN106302225B/en
Publication of CN106302225A publication Critical patent/CN106302225A/en
Application granted granted Critical
Publication of CN106302225B publication Critical patent/CN106302225B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00: Traffic control in data switching networks
    • H04L47/10: Flow control; Congestion control
    • H04L47/12: Avoiding congestion; Recovering from congestion
    • H04L47/125: Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention provides a method and apparatus for server load balancing. By means of an openflow flow table, the source port in the reply packet processed by a real server is changed to the port of the load dispatch server, thereby simply solving the problem of the load dispatch server and the real server using inconsistent ports. Since reply packets do not need to pass through the load dispatch server and are sent directly to the client, device load is reduced and high-performance throughput of large volumes of data is possible.

Description

Method and apparatus for server load balancing
Technical field
The present invention relates to virtual services under network virtualization technology and to load-balancing technology, and more particularly to a method and apparatus for load balancing of a virtual server cluster in a network.
Background art
Load balancing (Load Balance) is the technique of distributing a load (task) across multiple operating units for execution, so that the units complete the task jointly. In network applications, it provides a cheap, effective, and transparent way to extend the bandwidth of network devices and servers, increase throughput, strengthen network data-processing capability, and improve the flexibility and availability of the network. Server load balancing is the most common of these uses. Because the data access traffic of networks grows rapidly while the growth of server processor speed and memory access speed lags far behind the growth of network bandwidth and application services, servers have become the network bottleneck. For this reason, building a server cluster and using load-balancing technology to balance traffic among the servers has become a low-cost, scalable, and effective solution.
Linux Virtual Server (LVS) load balancing is one form of server load balancing. LVS is mainly used for load balancing across multiple servers; it works at the network layer and can implement high-performance, high-availability server clustering.
An LVS deployment mainly comprises clients, a load dispatch server, and real servers (RealServer, RS). The most important functions of the load dispatch server are packet forwarding and load balancing. The load dispatch server exposes a virtual IP address (vip) for outside access; when a user accesses the vip, the request reaches the load dispatch server, which selects an RS according to certain rules, and after the RS finishes processing, the data is returned to the client.
In LVS load balancing, the common pass-through modes are Virtual Server via Direct Routing (VS/DR), implemented by direct routing, and Virtual Server via Network Address Translation (VS/NAT), implemented by NAT.
In VS/NAT mode, the load dispatch server changes the destination address (i.e., the vip) in a data message to that of a specific RS, changes the port to the port of that RS, and then sends the message to the RS. After the RS has processed the data, it must return it to the load dispatch server, which changes the source address and source port of the packet back to the address and port of the vip, and finally sends the data out. This mode offers networking flexibility: back-end servers can be in different physical locations and different local area networks. However, all data forwarding must pass through the load dispatch server, so the load dispatch server becomes the new bottleneck of the system.
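As a purely illustrative sketch (not part of the patent), the two VS/NAT rewrites described above can be modeled as functions on a packet represented as a dict; the RS address and all ports here are hypothetical:

```python
# Minimal sketch of the VS/NAT rewrites: inbound, the scheduler points
# the packet at the chosen RS; outbound, it restores the vip as source.
VIP, VIP_PORT = "10.10.1.100", 80        # assumed vip of the scheduler
RS_IP, RS_PORT = "192.168.0.5", 8080     # assumed back-end RS

def nat_inbound(pkt):
    """Rewrite destination vip:port to the chosen RS."""
    out = dict(pkt)
    out["dst_ip"], out["dst_port"] = RS_IP, RS_PORT
    return out

def nat_outbound(pkt):
    """Rewrite the RS reply's source back to vip:port."""
    out = dict(pkt)
    out["src_ip"], out["src_port"] = VIP, VIP_PORT
    return out

request = {"src_ip": "10.10.1.10", "src_port": 36000,
           "dst_ip": VIP, "dst_port": VIP_PORT}
to_rs = nat_inbound(request)
reply = nat_outbound({"src_ip": RS_IP, "src_port": RS_PORT,
                      "dst_ip": "10.10.1.10", "dst_port": 36000})
```

Both directions pass through the scheduler, which is exactly the bottleneck the patent sets out to avoid.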
In VS/DR mode, the load dispatch server rewrites the hardware address of the request message and forwards the request to an RS, and the RS returns the response directly to the client. Since RS traffic does not pass through the load dispatch server, device load is reduced, alleviating the device-bottleneck problem of VS/NAT mode. However, in VS/DR mode the load dispatch server and the RSs must use the same port. The resulting problem is that the deployment and cascading of the load dispatch server and the RSs are limited: the load dispatch server cannot provide service for an RS on a different port, and only a flatter network topology can be used, which greatly restricts the network topology.
Summary of the invention
In view of the above problems in the prior art, an object of the present invention is to provide a method and apparatus for server load balancing in a network virtualization environment that is not limited by a device bottleneck, can achieve high-performance throughput of large volumes of data, and can be used when the load dispatch server and the RSs have different ports.
According to one aspect of the invention, a method of server load balancing is provided, in which a plurality of real servers run as virtual machines on at least one first host, comprising the steps of: the load dispatch server receives a request packet from a client; the load dispatch server determines, according to a load-balancing algorithm, the real server that will process the request packet; the load dispatch server converts the target port in the request packet to the port of that real server and converts the target hardware address to the hardware address of the first host; the load dispatch server sends the converted request packet to the first host of the real server that will process it; and the first host, which uses the openflow protocol, sends the real server's processed reply packet to the client, converting the source port in the reply packet to the load dispatch server's port by means of an openflow flow table.
Preferably, the openflow flow table consists of a plurality of flow entries, including a port-conversion flow entry. The first host receives the reply packet of the real server, extracts a second feature from the reply packet, and matches the second feature against the match fields of the flow entries. When the second feature matches the match fields of the port-conversion flow entry, the instruction field of that entry is executed, converting the source port in the reply packet to the load dispatch server's port.
Preferably, the second feature comprises the target IP address and the source port, and the match fields of the port-conversion flow entry comprise the client IP address and the real server port.
Preferably, the load dispatch server receives the request packet from the client based on an Address Resolution Protocol (ARP) broadcast, and the first host uses the openflow flow table to proxy-answer the ARP broadcast of the client's request packet, thereby shielding the real server's response to the ARP broadcast of the request packet.
Preferably, the openflow flow table consists of a plurality of flow entries, including a proxy-answer flow entry. The first host receives the request packet from the client, extracts a first feature from the request packet, and matches the first feature against the match fields of the flow entries. When the first feature matches the match fields of the proxy-answer flow entry, the instruction field of that entry is executed, returning the answer data to the client.
Preferably, the first feature comprises the target IP address, whether the packet contains an ARP header, and the source host IP address; the match fields of the proxy-answer flow entry comprise the load dispatch server's virtual IP address, the ARP header, and the client host IP address.
Preferably, the first host discards the ARP information of request packets received from the client, thereby shielding the real server's response to the ARP information of the request packet.
Preferably, the method for above-mentioned server load balancing further comprises the steps of: the first host installation QEMU guest Agent carries out virtual ip address setting to real server therein is run by QEMU guest agent.
Preferably, the first host adds the virtual IP address by invoking the ip address add command through QEMU guest agent.
Preferably, the first host modifies the network-card configuration file in the operating system through QEMU guest agent, adding the virtual IP address to the configuration file.
According to another aspect of the invention, an apparatus for server load balancing is provided. Real servers run as virtual machines on the apparatus, and the apparatus uses the openflow protocol. The apparatus comprises: a port-conversion module that receives the reply packet of a real server and, by means of an openflow flow table, converts the source port in the reply packet to the port of the load dispatch server; and a packet-sending module that sends the reply packet to the client.
Preferably, the openflow flow table consists of a plurality of flow entries, including a port-conversion flow entry. The port-conversion module extracts a second feature from the reply packet and matches it against the match fields of the flow entries; when the second feature matches the match fields of the port-conversion flow entry, the instruction field of that entry is executed, converting the source port in the reply packet to the load dispatch server's port.
Preferably, the second feature comprises the target IP address and the source port, and the match fields of the port-conversion flow entry comprise the client IP address and the real server port.
Preferably, the apparatus for server load balancing further comprises a proxy-answer module that receives the request packet from the client and uses the openflow flow table to proxy-answer the ARP broadcast of the client's request packet, thereby shielding the real server's response to the ARP broadcast of the request packet.
Preferably, the openflow flow table consists of a plurality of flow entries, including a proxy-answer flow entry. The proxy-answer module extracts a first feature from the request packet and matches it against the match fields of the flow entries; when the first feature matches the match fields of the proxy-answer flow entry, the instruction field of that entry is executed, returning the answer data to the client.
Preferably, the first feature comprises the target IP address, whether the packet contains an ARP header, and the source host IP address; the match fields of the proxy-answer flow entry comprise the load dispatch server's virtual IP address, the ARP header, and the client host IP address.
According to a third aspect of the invention, a method of setting the IP address of a real server is provided. The real server runs as a virtual machine on a first host; QEMU guest agent is installed on the first host, and the virtual IP address of the real server running on it is set through QEMU guest agent.
Preferably, the first host adds the virtual IP address by invoking the ip address add command through QEMU guest agent.
Preferably, the first host modifies the network-card configuration file in the operating system through QEMU guest agent, adding the virtual IP address to the configuration file.
In the present invention, the first host uses an openflow flow table to change the source port in the reply packet processed by the real server to the port of the load dispatch server, thereby simply solving the problem of the load dispatch server and the real server using inconsistent ports. Furthermore, because the source port in the reply packet is changed through the configuration of the first host rather than by the real server itself, complicated configuration by the real server's operator is avoided.
Since reply packets do not pass through the load dispatch server and are sent directly to the client, device load is reduced and high-performance throughput of large volumes of data is possible.
Brief description of the drawings
The technical solution of the present invention is described in detail below in conjunction with the drawings and specific embodiments, so that its characteristics and advantages become more apparent.
Fig. 1 is a schematic diagram of the data flow of one embodiment of the method of the present invention;
Fig. 2 is a schematic flow chart of the method of the present invention;
Fig. 3 is a detailed flow chart of step S105 in Fig. 2.
Detailed description of embodiments
Detailed embodiments of the present invention are described below. Although the invention is illustrated and described in connection with some specific embodiments, it should be noted that the invention is not limited to these embodiments. On the contrary, modifications and equivalent replacements of the invention are intended to fall within the scope of the claims.
Some exemplary embodiments are described as processes or methods depicted in flow charts. Although a flow chart describes operations as a sequential process, many of the operations can be performed in parallel, concurrently, or simultaneously, and the order of the operations can be rearranged. A process may terminate when its operations are completed, and may also have additional steps not included in the drawings. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on.
It should be understood that although the terms "first", "second", etc. may be used herein to describe units or data, these units and data should not be limited by those terms; the terms are used only to distinguish one datum from another. For example, without departing from the scope of the exemplary embodiments, a first feature could be termed a second feature, and similarly a second feature could be termed a first feature.
The method of server load balancing provided by the present invention is an improvement on the traditional VS/DR mode. As in traditional VS/DR, the real server (RS) sends the reply packet directly back to the client; since RS traffic does not pass through the load dispatch server, device load is reduced and high-performance throughput of large volumes of data is possible. At the same time, the method provided by the invention conveniently solves the problem of the load dispatch server providing service for an RS on a different port.
The principle and data flow of the method of the invention are first explained below with reference to the drawings.
Fig. 1 is a schematic diagram of the data flow of one embodiment of the method. As shown in Fig. 1, the VS/DR server cluster 401 includes a load dispatch server 102 and an RS 103. Note that a VS/DR server cluster 401 usually includes multiple RS 103; for ease of explanation, Fig. 1 exemplarily shows only one, and the structure and operating logic of the RSs not shown are identical to those of the exemplary RS 103. RS 103 is the server that actually processes the client's request data; load dispatch server 102 determines which RS 103 will process the request data, converts the request packet 301 received from client 101, and sends it to that RS 103.
In the present invention, RS 103 runs as a virtual machine on a first host 203. In practice, one RS may run on each first host, that is, multiple RSs may run in isolation on multiple first hosts; alternatively, more than one RS may run on a single first host.
For ease of describing the data communication, in this embodiment the load dispatch server 102 is also depicted as running as a virtual machine on a second host 202, and the client 101 as running as a virtual machine on a client host 201. In actual use, the load dispatch server 102 is usually a physical server (not in virtual machine form), but the virtual-machine/host form is used to model the communication data structure, so as to facilitate communication with RS 103. In actual use, when a physical client sends a request packet, the packet can be converted by a virtual gateway into the simulated virtual-client form. In this embodiment, each packet transmitted between the entities contains a source IP address, source port, source host IP address, source hardware address, target IP address, target port, target host IP address, and target hardware address. These fields are carried by the IP protocol, and setting them conveniently achieves addressed delivery of each packet. For ease of explanation, the IP addresses of the entities in this embodiment are illustratively set as follows:
Client 101 has IP address 10.10.1.10 and port 36000. The client host 201 running client 101 has IP address 172.24.1.10.
Load dispatch server 102 has virtual IP address (vip) 10.10.1.100 and port 80. The second host 202 running load dispatch server 102 has IP address 172.24.1.100.
RS 103 has virtual IP address (vip) 10.10.1.100 and port 8080. The first host 203 running RS 103 has IP address 172.24.1.20.
The vip of load dispatch server 102 and the vip of RS 103 are set to be identical. In VS/DR mode this is required by packet forwarding; otherwise RS 103 would refuse to process the packet.
Ordinarily, the vip of RS 103 must be configured manually by the operator of RS 103, which requires special setup on the operator's part and is cumbersome.
In the present invention, the QEMU guest agent (QGA) software is installed on the first host 203. QGA is an application program that runs inside a virtual machine; its purpose is to provide a way for the host and the virtual machine to interact. Specifically, QGA is an agent installed in the virtual machine, and the host communicates with the agent through a channel (i.e., a unix socket), giving the host a means to control and query the virtual machine from outside. For example, the host can issue an instruction to the virtual machine to modify its hostname, or an instruction to obtain information about all processes in the virtual machine. QGA listens on this channel at all times; when it finds that an instruction has been sent, it parses the instruction, executes it, and returns the result through the channel. What is transmitted is a JSON string.
The first host 203 sets the vip of the RS 103 running on it through QGA. Specifically, the first host 203 invokes the ip address add command in RS 103 through QGA, thereby setting the vip in real time.
To prevent the vip of RS 103 from being lost after RS 103 restarts, the first host 203 modifies, through QGA, the network-card configuration file in the operating system of RS 103, adding the vip to the file so that the vip remains set after a restart.
By the above method, the operator of RS 103 no longer needs to set the vip of RS 103 manually; configuring the first host 203 achieves the effect of configuring the vip of RS 103.
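As an illustrative sketch of the QGA interaction described above (not taken from the patent), the host-side command can be built as the JSON string that would be written to the agent's unix-socket channel. guest-exec is a real QGA command, but the exact binary path, the /32 mask, and the lo device here are assumptions typical of VS/DR-style vip setup:

```python
import json

def qga_guest_exec(path, args):
    """Build the QGA 'guest-exec' JSON command the host writes
    to the agent's channel (unix socket)."""
    return json.dumps({
        "execute": "guest-exec",
        "arguments": {"path": path, "arg": args, "capture-output": True},
    })

# Ask the guest (RS) to add the vip; path/mask/device are hypothetical.
cmd = qga_guest_exec("/sbin/ip",
                     ["address", "add", "10.10.1.100/32", "dev", "lo"])
```

In a real deployment the string would be sent over the virtio-serial channel and the agent's JSON reply (also a string, as the text notes) would be read back.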
Fig. 2 is a schematic flow chart of the method of the present invention.
With reference to Figs. 1 and 2, client 101 sends request packet 301 to the VS/DR server cluster 401 based on an Address Resolution Protocol (ARP) broadcast.
In step S101, load dispatch server 102 receives request packet 301. Specifically, load dispatch server 102 is assigned a vip, which the VS/DR server cluster 401 uses as its unified external IP address; that is, the destination address of the request packet 301 issued by client 101 is set to this vip, thereby realizing client 101's access to the VS/DR server cluster 401.
In the example shown in Fig. 1, in request packet 301 issued by client 101, the source IP address and source port are client 101's IP address 10.10.1.10 and port 36000. The source host IP address and source hardware address of request packet 301 are client host 201's IP address 172.24.1.10 and its hardware address (not shown). The target IP address and target port of request packet 301 are load dispatch server 102's vip 10.10.1.100 and port 80. The target host IP address and target hardware address of request packet 301 are second host 202's IP address 172.24.1.100 and its hardware address (not shown).
Load dispatch server 102 receives request packet 301 according to its target IP address 10.10.1.100 and target port 80.
Since request packet 301 must first be received and processed by load dispatch server 102, RS 103's response to the ARP broadcast must be shielded when client 101 sends request packet 301 based on the ARP broadcast, to guarantee that client 101 does not receive an ARP-broadcast response from RS 103.
The present invention shields RS 103's response to the ARP broadcast of the request packet by means of the first host 203.
In the present invention, the first host 203 uses the openflow protocol. The openflow protocol is the core technology of Software Defined Networking (SDN), proposed by Professor Nick McKeown et al. in 2007. The SDN architecture mainly consists of openflow switches and an openflow controller; a switch is mainly composed of three parts: the openflow flow table (FlowTable), a secure channel, and the OpenFlow protocol. The OpenFlow flow table is used for packet lookup and forwarding.
Open vSwitch software is installed on the first host 203, realizing the equivalent of an openflow switch; the controller function is realized by other remote servers in the virtual network. The first host 203 contains an openflow flow table. The openflow flow table consists of a plurality of flow entries, each of which is a forwarding rule; a packet entering the first host 203 obtains its forwarding destination port by querying the flow table. A flow entry contains match fields, which are used to match packet features, and an instruction field, which is used to execute the corresponding operation.
The openflow flow table of first host 203 contains a proxy-answer flow entry. First host 203 receives request packet 301 from client 101, extracts the first feature from request packet 301, and matches the first feature against the match fields of the flow entries. When the first feature matches the match fields of the proxy-answer flow entry, the instruction field of that entry is executed, returning the answer data to the client.
The first feature comprises the target IP address, whether the packet contains an ARP header, and the source host IP address. The match fields of the proxy-answer flow entry comprise load dispatch server 102's vip, the ARP header, and client host 201's IP address. That is, when the target IP address of the first feature is load dispatch server 102's vip, request packet 301 contains an ARP header, and the source host IP address is client host 201's IP address, the proxy-answer flow entry is judged to match; its instruction field is therefore executed, returning the answer data to client 101 and discarding RS 103's ARP-broadcast response information.
Because first host 203 returns the answer data to client 101, RS 103's ARP-broadcast response cannot reach client 101; that is, RS 103's response to the ARP broadcast of request packet 301 is shielded.
This embodiment uses the proxy-answer mode of the openflow flow table. In other embodiments this can be realized in other ways; for example, first host 203 can discard the ARP information of request packets 301 received from client 101, thereby shielding RS 103's response to the ARP broadcast of request packet 301. However, simply discarding request packet 301 would cause client 101 to keep sending ARP broadcasts, a situation that the proxy-answer mode of the openflow flow table used in this embodiment avoids.
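The proxy-answer flow entry just described can be sketched as a match on the "first feature" tuple (target IP, presence of an ARP header, source host IP). This is an illustrative model of the matching logic, not Open vSwitch syntax; the addresses are the embodiment's examples:

```python
# Toy model of the proxy-answer flow entry: on a match, the first
# host answers the ARP request itself ("proxy-reply"); otherwise the
# packet is dropped so the RS's own ARP answer never reaches the client.
VIP = "10.10.1.100"            # load dispatch server vip
CLIENT_HOST_IP = "172.24.1.10" # client host 201

def first_feature(pkt):
    return (pkt["target_ip"], pkt["has_arp_header"], pkt["src_host_ip"])

def handle_arp(pkt):
    if first_feature(pkt) == (VIP, True, CLIENT_HOST_IP):
        return "proxy-reply"   # instruction field: answer in the RS's stead
    return "drop"

arp_request = {"target_ip": VIP, "has_arp_header": True,
               "src_host_ip": CLIENT_HOST_IP}
```

Answering rather than silently dropping is what keeps the client from retransmitting ARP broadcasts indefinitely, as the paragraph above notes.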
Then, in step S102, after load dispatch server 102 receives request packet 301 from client 101, it determines the RS 103 that will process client request packet 301 according to the configured load-scheduling algorithm. The load-scheduling algorithm may be any load-scheduling algorithm used in existing VS/DR mode.
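The patent leaves the scheduling algorithm open; unweighted round-robin is one classic LVS scheduler and serves here as a minimal hedged example (the RS names are hypothetical):

```python
from itertools import cycle

class RoundRobinScheduler:
    """Pick real servers in strict rotation, ignoring request contents."""
    def __init__(self, real_servers):
        self._ring = cycle(real_servers)

    def pick(self, request=None):
        return next(self._ring)

sched = RoundRobinScheduler(["rs-a", "rs-b", "rs-c"])
picks = [sched.pick() for _ in range(4)]  # rs-a, rs-b, rs-c, rs-a
```

Production LVS schedulers also include weighted round-robin and least-connections variants, any of which could stand in for `pick` without changing the rest of the flow.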
In step S103, load dispatch server 102 converts the target port in request packet 301 to the port of RS 103 and converts the target hardware address to the hardware address of first host 203, thereby obtaining converted request packet 302.
Ordinary VS/DR mode requires the load dispatch server and the RSs to be in the same network segment: the load dispatch server converts the target hardware address in the request packet to the RS's hardware address, so that an RS in the same segment as the load dispatch server can receive the request packet. When the load dispatch server and an RS are not in the same segment, the load dispatch server cannot deliver the request packet to the RS.
In the present invention, because RS 103 runs as a virtual machine on the first host 203, the virtual machine's IP address can be set as needed, independently of the first host 203's IP address, to the same vip as the load dispatch server's. But the port of RS 103 may still differ from the port of load dispatch server 102, which would still prevent load dispatch server 102 from delivering the request packet to RS 103.
To address this problem, in this example load dispatch server 102 converts the target port in request packet 301 to RS 103's port and converts the target hardware address to first host 203's hardware address, so that the converted request packet 302 can be received by RS 103.
Load dispatch server 102 may convert the target port in request packet 301 to RS 103's port, and the target hardware address to first host 203's hardware address, by directly modifying the packet.
As shown in Fig. 1, the second host 202 converts request packet 301, after processing by load dispatch server 102, into converted request packet 302. Specifically, second host 202 converts the target port to RS 103's port 8080, and changes the target host IP address and target hardware address to first host 203's IP address 172.24.1.20 and hardware address (not shown).
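Step S103 can be sketched with the embodiment's example values; note that only the target port and the target-host fields change, while the source fields and the vip destination address stay intact. The MAC address below is hypothetical, since the figure does not show it:

```python
# Sketch of the request conversion 301 -> 302 described above.
RS_PORT = 8080
FIRST_HOST_IP = "172.24.1.20"
FIRST_HOST_MAC = "aa:bb:cc:dd:ee:20"  # assumed; "(not shown)" in Fig. 1

def convert_request(pkt301):
    pkt302 = dict(pkt301)
    pkt302["target_port"] = RS_PORT          # 80 -> 8080
    pkt302["target_host_ip"] = FIRST_HOST_IP # second host -> first host
    pkt302["target_hw"] = FIRST_HOST_MAC
    return pkt302

pkt301 = {"src_ip": "10.10.1.10", "src_port": 36000,
          "target_ip": "10.10.1.100", "target_port": 80,
          "target_host_ip": "172.24.1.100", "target_hw": "not-shown"}
pkt302 = convert_request(pkt301)
```

Because the target IP address remains the shared vip 10.10.1.100, RS 103 accepts packet 302 as addressed to itself.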
Then, in step S104, load dispatch server 102 sends converted request packet 302 to RS 103.
RS 103 receives converted request packet 302 according to its target IP address 10.10.1.100 and target port 8080.
After receiving converted request packet 302, RS 103 processes and responds to the request data and constructs reply packet 303.
In step S105, first host 203 sends reply packet 303 back to client 101. Fig. 3 shows the detailed flow of step S105 in Fig. 2.
As shown in Fig. 3, in step S1051 the first host 203 receives reply packet 303 constructed by RS 103.
As described above, first host 203 uses the openflow protocol. In step S1052, first host 203 converts the source address port in reply packet 303, by means of the openflow flow table, to the port of load dispatch server 102.
Specifically, the openflow flow table contains a port-conversion flow entry. First host 203 extracts the second feature from reply packet 303 and matches it against the match fields of the flow entries. When the second feature matches the match fields of the port-conversion flow entry, the instruction field of that entry is executed, converting the source address port in reply packet 303 to load dispatch server 102's port.
Second feature includes target ip address, source port.Matching field in the flow entry of conversion port includes client The address 101IP, the port RS103.That is, working as the target ip address of second feature for the address client 101IP, and source port When for the port RS103, then the flow entry for matching conversion port is judged, thereby executing the coding line in the flow entry of conversion port Section, will reply the source address port translation in data packet 303 is load 102 port of dispatch server.
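The match-then-execute behavior described above can be sketched as follows. This is only an illustrative model of flow-table lookup (real OpenFlow matching happens in the switch datapath, not in Python), and the table contents, feature names, and addresses are assumptions based on the description:

```python
# Flow table with one port-conversion entry: match on the second feature
# (target IP = client IP, source port = RS port), instruction rewrites the
# source port to the dispatcher's port.
FLOW_TABLE = [
    {
        "match": {"dst_ip": "10.10.1.10", "src_port": 8080},   # client IP, RS103 port
        "action": lambda pkt: {**pkt, "src_port": 80},         # -> dispatcher port 80
    },
]

def apply_flow_table(packet):
    """Extract the second feature from the packet and match it against each
    flow entry's matching field; on a hit, execute that entry's instruction."""
    feature = {"dst_ip": packet["dst_ip"], "src_port": packet["src_port"]}
    for entry in FLOW_TABLE:
        if all(feature.get(k) == v for k, v in entry["match"].items()):
            return entry["action"](packet)
    return packet  # table miss: forward unchanged

# Reply packet 303 as constructed by RS103 (illustrative values).
reply = {"src_ip": "10.10.1.100", "src_port": 8080,
         "dst_ip": "10.10.1.10", "dst_port": 36000}
out = apply_flow_table(reply)
```

After the table hit, `out` carries source port 80 while the source IP stays the VIP, which is exactly why the client can associate the reply with its original request.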
Since reply data packet 303 is sent out by the first host 203, the source hardware address of reply data packet 303 is set to the hardware address of the first host 203.
The source IP address and source port of reply data packet 303 are the vip 10.10.1.100 of the load dispatch server 102 and port 80 of the load dispatch server 102. The source host IP address and source hardware address are the IP address 172.24.1.20 of the first host 203 and the hardware address (not shown) of the first host 203. The target IP address and target port are the IP address 10.10.1.10 of the client 101 and port 36000 of the client 101.
For the client 101, since the source IP address and source port of reply data packet 303 are consistent with the target IP address and target port of the request data packet 301 it sent, the client 101 receives reply data packet 303 and associates it with the request.
From the above description of the process and data flow of the present invention, it can be seen that the present invention converts reply data packet 303 through the openflow flow table of the first host 203, thereby conveniently realizing data communication even when the ports of the load dispatch server 102 and RS103 are inconsistent. Moreover, with the method of the present invention, only the first host 203 needs to be configured; no configuration of RS103 is required.
In practice, the operator of the load dispatch server 102 and the first host 203 is usually different from the operator of RS103. With the method of the present invention, server load balancing across ports can be realized without any change to RS103.
According to the above method, the present invention also provides a server load balancing device, namely the first host 203, which uses the openflow protocol.
The first host 203 includes a port conversion module, a data packet sending module, and a proxy-answer instruction module.
The port conversion module receives reply data packet 303 from RS103 and converts the source address port in reply data packet 303 to the port of the load dispatch server 102 through the openflow flow table. The specific conversion method has already been described in the method above and is not repeated here.
The data packet sending module sends the converted reply data packet 303 to the client 101.
The proxy-answer instruction module receives request data packet 301 from the client 101 and, through the openflow flow table, proxy-answers the ARP broadcast of request data packet 301 from the client 101, thereby shielding RS103 from responding to the ARP information broadcast of request data packet 301. The specific proxy-answer instruction method has already been described in the method above and is not repeated here.
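The proxy-answer behavior can be sketched in the same illustrative style (an assumption-laden model, not the patent's implementation — the VIP, the MAC value, and the dict-based ARP packet are all hypothetical): when an ARP request from the client asks who owns the dispatcher's VIP, the first host synthesizes the reply itself, so the real server never answers and never exposes its own hardware address:

```python
VIP = "10.10.1.100"                     # shared vip of dispatcher and RS
ANSWER_MAC = "aa:aa:aa:aa:aa:aa"        # hypothetical MAC returned in the proxy answer

def handle_arp(packet):
    """Match the first feature (target IP, presence of an ARP header,
    source host IP) against the proxy-answer entry; on a hit, return a
    synthesized ARP reply, otherwise None (no proxy answer)."""
    if packet.get("is_arp") and packet.get("target_ip") == VIP:
        return {"is_arp": True, "op": "reply",
                "target_ip": packet["sender_ip"],   # answer goes back to the asker
                "answer_mac": ANSWER_MAC}
    return None  # not an ARP request for the VIP: nothing to proxy-answer

arp_req = {"is_arp": True, "op": "request",
           "sender_ip": "10.10.1.10", "target_ip": VIP}   # from client 101
proxy_reply = handle_arp(arp_req)
```

Because the flow entry answers before the broadcast reaches the guest, RS103 keeps the VIP configured but stays silent on ARP, which is the same effect the classic VS/DR setup achieves with per-server ARP suppression.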
It should be noted that the present invention is an application of computer load-balancing technology in a virtual environment. The implementation of the present invention may involve multiple software function modules. After carefully reading the application documents and accurately understanding the realization principle and inventive purpose of the present invention, those skilled in the art can, in combination with existing well-known technology, implement the present invention with the software programming skills they possess. The aforementioned software function modules include, but are not limited to: a data packet receiving module, an openflow flow table obtaining module, etc. All such modules fall within the scope referred to by the present application documents and are not enumerated one by one by the applicant.
Since the present invention is an improvement on the traditional VS/DR mode, for implementation steps not described in detail herein, those skilled in the art can use the existing VS/DR mode. For the control of virtual machines by hosts not described in detail herein, those skilled in the art can use existing host-virtual machine control methods. For the interaction between an openflow switch based on the SDN architecture and an openflow controller, existing interaction methods can likewise be used.
The above are only specific application examples of the present invention and do not limit the protection scope of the present invention in any way. In addition to the above embodiments, the present invention can also have other embodiments. All technical solutions formed by equivalent substitution or equivalent transformation fall within the protection scope of the present invention.

Claims (16)

1. A method of server load balancing, wherein a plurality of real servers operate in the form of virtual machines on at least one first host, characterized in that the method comprises the steps of:
a load dispatch server receiving a request data packet from a client;
the load dispatch server determining, according to a load balancing algorithm, the real server that processes the request data packet, wherein the vip of the load dispatch server is consistent with the vip of the real server;
the load dispatch server converting the target port in the request data packet to the port of the real server and converting the target hardware address in the request data packet to the hardware address of the first host, without changing the target IP address in the request data packet;
the load dispatch server sending the converted request data packet to the first host of the real server that processes the request data packet;
the first host using the openflow protocol, the first host sending to the client the reply data packet processed by the real server, and the first host converting the source address port in the reply data packet to the load dispatch server port through an openflow flow table, without changing the source IP address in the reply data packet.
2. the method for server load balancing as described in claim 1, which is characterized in that
Openflow flow table is made of a plurality of flow entry, wherein include the flow entry of conversion port,
First host receives the reply data packet of real server, and the second spy is extracted from the reply data packet Sign, the second feature is matched with the matching field of a plurality of flow entry,
When matching when the matching field in the second feature and the flow entry of conversion port, then the flow entry of conversion port is executed In instruction field, by reply data packet in source port be converted to the load dispatch Service-Port.
3. the method for server load balancing as claimed in claim 2, which is characterized in that
The second feature includes target ip address, source port,
Matching field in the flow entry of the conversion port includes client ip address, real server port.
4. the method for server load balancing as described in claim 1, which is characterized in that
Request data package of the load dispatch server based on address resolution protocol broadcast reception from client,
First host is broadcasted by address resolution protocol of the openflow flow table to the request data package from client It carries out for answering, to shield the response of ARP information broadcast of the real server to request data package.
5. the method for server load balancing as claimed in claim 4, which is characterized in that
Openflow flow table is made of a plurality of flow entry, wherein the flow entry of instruction is answered comprising generation,
First host receives the request data package from client, and extracts first from the request data package Feature matches the fisrt feature with the matching field of a plurality of flow entry,
When being matched when the matching field in the flow entry that the fisrt feature answers instruction with generation, then execute for the flow entry for answering instruction In instruction field, to client return for answer evidence.
6. the method for server load balancing as claimed in claim 5, which is characterized in that
The fisrt feature includes target ip address, and whether data packet includes ARP header, sourcesink host IP address,
It includes load dispatch server virtual IP address, ARP header, client that the generation, which answers the matching field in the flow entry of instruction, Hold host IP address.
7. the method for server load balancing as described in claim 1, which is characterized in that
First host abandons the ARP information for the request data package that the client received is sent, to shield Real server is covered to respond the ARP information of request data package.
8. the method for server load balancing as described in claim 1, which is characterized in that further comprise the steps of:
First host installs QEMU guest agent, by QEMU guest agent to operation true clothes therein Business device carries out virtual ip address setting.
9. the method for server load balancing as claimed in claim 8, which is characterized in that
First host calls ip address add order addition described virtual by the QEMU guest agent IP address.
10. the method for server load balancing as claimed in claim 8 or 9, which is characterized in that
First host modifies the network card configuration file in operating system by the QEMU guest agent, described The virtual ip address is added in network card configuration file.
11. A device of server load balancing, wherein a real server operates in the form of a virtual machine on the server load balancing device, characterized in that:
the server load balancing device uses the openflow protocol;
the server load balancing device includes:
a port conversion module, which receives the reply data packet of the real server and converts the source address port in the reply data packet to the load dispatch server port through an openflow flow table, without changing the source IP address in the reply data packet; and a data packet sending module, which sends the reply data packet to the client.
12. the device of server load balancing as claimed in claim 11, which is characterized in that
Openflow flow table is made of a plurality of flow entry, wherein include the flow entry of conversion port,
The port translation module extracts second feature from the reply data packet, by the second feature and a plurality of stream The matching field of list item is matched,
When matching when the matching field in the second feature and the flow entry of conversion port, then the flow entry of conversion port is executed In instruction field, by reply data packet in source port be converted to the load dispatch Service-Port.
13. the device of server load balancing as claimed in claim 12, which is characterized in that
The second feature includes target ip address, source port,
Matching field in the flow entry of the conversion port includes client ip address, real server port.
14. the device of server load balancing as claimed in claim 11, which is characterized in that
The device of the server load balancing further include:
In generation, answers instruction module, and the generation answers instruction module and receives the request data package from client, and the generation answers instruction module The address resolution protocol broadcast of the request data package from client is carried out for answering, so that shielding is true by openflow flow table Response of the real server to the ARP information broadcast of request data package.
15. the device of server load balancing as claimed in claim 14, which is characterized in that
Openflow flow table is made of a plurality of flow entry, wherein the flow entry of instruction is answered comprising generation,
In the generation, answers instruction module and extracts fisrt feature from the request data package, by the fisrt feature and a plurality of stream The matching field of list item is matched,
When being matched when the matching field in the flow entry that the fisrt feature answers instruction with generation, then execute for the flow entry for answering instruction In instruction field, to client return for answer evidence.
16. the device of server load balancing as claimed in claim 15, which is characterized in that
The fisrt feature includes target ip address, and whether data packet includes ARP header, sourcesink host IP address,
It includes load dispatch server virtual IP address, ARP header, client that the generation, which answers the matching field in the flow entry of instruction, Hold host IP address.
CN201610906359.4A 2016-10-18 2016-10-18 A kind of method and apparatus of server load balancing Active CN106302225B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610906359.4A CN106302225B (en) 2016-10-18 2016-10-18 A kind of method and apparatus of server load balancing

Publications (2)

Publication Number Publication Date
CN106302225A CN106302225A (en) 2017-01-04
CN106302225B true CN106302225B (en) 2019-05-03

Family

ID=57719075

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610906359.4A Active CN106302225B (en) 2016-10-18 2016-10-18 A kind of method and apparatus of server load balancing

Country Status (1)

Country Link
CN (1) CN106302225B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3995955A1 (en) * 2017-06-30 2022-05-11 Huawei Technologies Co., Ltd. Data processing method, network interface card, and server
CN107948088B (en) * 2018-01-05 2021-10-01 宝牧科技(天津)有限公司 Method for balancing network application layer load
CN108347472B (en) * 2018-01-12 2021-04-09 网宿科技股份有限公司 Configuration method of IP address, cloud server, cloud platform and readable storage medium
CN110620802B (en) * 2018-06-20 2021-04-09 华为技术有限公司 Load balancing method and device
CN109361602B (en) * 2018-11-12 2021-06-22 网宿科技股份有限公司 Method and system for forwarding message based on OpenStack cloud platform
CN110099115B (en) * 2019-04-30 2022-02-22 湖南麒麟信安科技股份有限公司 Load balancing method and system for transparent scheduling forwarding
WO2021051880A1 (en) * 2019-09-18 2021-03-25 平安科技(深圳)有限公司 Resource data acquisition method and apparatus, computer device and storage medium
CN111641724B (en) * 2020-06-04 2023-02-21 山东汇贸电子口岸有限公司 Application method of LVS load balancer in cloud
CN113067824B (en) * 2021-03-22 2023-04-07 平安科技(深圳)有限公司 Data scheduling method, system, virtual host and computer readable storage medium
CN113691460B (en) * 2021-08-26 2023-10-03 平安科技(深圳)有限公司 Data transmission method, device, equipment and storage medium based on load balancing
CN115967679A (en) * 2021-10-09 2023-04-14 华为技术有限公司 Data request method, communication device and communication system
CN116633934A (en) * 2022-02-10 2023-08-22 华为云计算技术有限公司 Load balancing method, device, node and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103346981A (en) * 2013-06-28 2013-10-09 华为技术有限公司 Virtual exchange method, related device and computer system
CN103780502A (en) * 2012-10-17 2014-05-07 阿里巴巴集团控股有限公司 System, method and device for data interaction under load balancing

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103780502A (en) * 2012-10-17 2014-05-07 阿里巴巴集团控股有限公司 System, method and device for data interaction under load balancing
CN103346981A (en) * 2013-06-28 2013-10-09 华为技术有限公司 Virtual exchange method, related device and computer system


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: 200433 Room 201, 10 B, 619 Longchang Road, Yangpu District, Shanghai.

Applicant after: Excellent Polytron Technologies Inc

Address before: 200433 room 1207-10, 6 Wade Road, Yangpu District, Shanghai.

Applicant before: SHANGHAI UCLOUD INFORMATION TECHNOLOGY CO., LTD.

GR01 Patent grant
GR01 Patent grant