Summary of the invention
The main purpose of the present invention is to provide a server load balancing method and apparatus, so as to solve the problem that when one load-balanced server goes down, the web page server corresponding to it cannot work normally.
To achieve these goals, according to one aspect of the present invention, a server load balancing method is provided. The server load balancing method according to the present invention is used for the load balancing of multiple web page servers, wherein multiple route entries are provided between each web page server and the load-balanced servers, the load-balanced servers comprise a first load-balanced server and a second load-balanced server, the multiple route entries comprise a first route entry and a second route entry, the first route entry is the path for sending information between each web page server and the first load-balanced server, and the second route entry is the path for sending information between each web page server and the second load-balanced server. The method comprises: detecting whether communication over the first route entry is normal; if it is detected that communication over the first route entry is normal, sending information to the first load-balanced server through the first route entry; and if a communication failure of the first route entry is detected, forwarding the information to the second load-balanced server through the second route entry.
Further, detecting whether communication over the first route entry is normal comprises: sending detection information; detecting whether feedback information via the first route entry is received within a preset time; if the feedback information via the first route entry is received, determining that communication over the first route entry is normal; and if the feedback information via the first route entry is not received, determining that communication over the first route entry has failed.
Further, each web page server is provided with a first port and a second port, wherein opening the first port indicates that the web page server is allowed to send information via the first route entry to the first load-balanced server, closing the first port indicates that the web page server is not allowed to send information via the first route entry to the first load-balanced server, opening the second port indicates that the web page server is allowed to send information via the second route entry to the second load-balanced server, and closing the second port indicates that the web page server is not allowed to send information via the second route entry to the second load-balanced server. After it is detected that communication over the first route entry is normal, and before the information is sent to the first load-balanced server through the first route entry, the method further comprises: opening the first port and closing the second port at the same time.
Further, sending information to the first load-balanced server through the first route entry comprises: detecting whether the first port is open; and when it is detected that the first port is open, forwarding the information to the first load-balanced server through the first route entry.
Further, each web page server is provided with a first port and a second port, wherein opening the first port indicates that the web page server is allowed to send information via the first route entry to the first load-balanced server, closing the first port indicates that it is not, opening the second port indicates that the web page server is allowed to send information via the second route entry to the second load-balanced server, and closing the second port indicates that it is not. After a communication failure of the first route entry is detected, and before the information is forwarded to the second load-balanced server through the second route entry, the method further comprises: closing the first port and opening the second port at the same time.
Further, the first route entry has a preset first-path first priority and a first-path second priority, where the first-path first priority represents the priority with which the web page server sends messages to the first load-balanced server via the first route entry. The second route entry has a second-path first priority, which represents the priority of sending messages via the second route entry to the second load-balanced server; the second-path first priority is lower than the first-path first priority. If a communication failure of the first route entry is detected, sending information to the second load-balanced server through the second route entry comprises: changing the first-path first priority to the first-path second priority, wherein the first-path second priority is lower than the second-path first priority; judging whether the second route entry is the route entry with the highest priority among the multiple route entries; and when it is judged that the second route entry is the route entry with the highest priority among the multiple route entries, receiving, by the second load-balanced server, the information forwarded via the second route entry.
To achieve these goals, according to a further aspect of the present invention, a server load balancing device is provided. The server load balancing device is used for the load balancing of multiple web page servers, wherein multiple route entries are provided between each web page server and the load-balanced servers, the load-balanced servers comprise a first load-balanced server and a second load-balanced server, the multiple route entries comprise a first route entry and a second route entry, the first route entry is the path for sending information between each web page server and the first load-balanced server, and the second route entry is the path for sending information between each web page server and the second load-balanced server. The server load balancing device according to the present invention comprises: a detecting unit for detecting whether communication over the first route entry is normal; a first transmitting unit for sending information to the first load-balanced server through the first route entry when it is detected that communication over the first route entry is normal; and a second transmitting unit for forwarding the information to the second load-balanced server through the second route entry when a communication failure of the first route entry is detected.
Further, the detecting unit comprises: a second sending module for sending detection information; a first detection module for detecting whether feedback information via the first route entry is received within a preset time; a second detection module for determining that communication over the first route entry is normal when the feedback information via the first route entry is received; and a third detection module for determining that communication over the first route entry has failed when the feedback information via the first route entry is not received.
Further, each web page server is provided with a first port and a second port, wherein opening the first port indicates that the web page server is allowed to send information via the first route entry to the first load-balanced server, closing the first port indicates that it is not, opening the second port indicates that the web page server is allowed to send information via the second route entry to the second load-balanced server, and closing the second port indicates that it is not. For operation after it is detected that communication over the first route entry is normal, and before the information is sent to the first load-balanced server through the first route entry, the device further comprises: a configuration module for opening the first port and closing the second port at the same time.
Further, the first transmitting unit comprises: a fourth detection module for detecting whether the first port is open; and a sending module for forwarding the information to the first load-balanced server through the first route entry when it is detected that the first port is open.
The present invention adopts a method comprising the following steps: detecting whether communication over the first route entry is normal; if it is detected that communication over the first route entry is normal, sending information to the first load-balanced server through the first route entry; and if a communication failure of the first route entry is detected, forwarding the information to the second load-balanced server through the second route entry. By detecting whether communication over the first route entry is normal, the present invention determines a route entry over which communication is normal, and the information in the web page server is sent to the corresponding load-balanced server via that route entry, thereby solving the problem that when one load-balanced server goes down, the web page server corresponding to it cannot work normally.
Embodiment
It should be noted that, in the case of no conflict, the embodiments in the present application and the features in the embodiments may be combined with each other. The present invention is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
In order to enable those skilled in the art to better understand the solution of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely in conjunction with the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application rather than all of them. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the scope of protection of the present application.
It should be noted that the terms "first", "second", etc. in the specification, claims and above accompanying drawings of the present application are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way may be interchanged where appropriate, so that the embodiments of the present application described herein can be implemented. In addition, the terms "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to those steps or units clearly listed, but may comprise other steps or units not clearly listed or inherent to such a process, method, product or device.
Fig. 1 is a flow chart of the server load balancing method according to the present invention. As shown in Fig. 1, the method comprises the following steps S101 to S103:
Step S101: detecting whether communication over the first route entry is normal.
There are multiple web page servers, and multiple route entries are provided between each web page server and the load-balanced servers. The load-balanced servers comprise a first load-balanced server and a second load-balanced server, and the multiple route entries comprise a first route entry and a second route entry. The first route entry is the path for sending information between each web page server and the first load-balanced server, and the second route entry is the path for sending information between each web page server and the second load-balanced server.
It is detected whether communication is normal over the first route entry, by which the first web page server sends information to the first load-balanced server.
Specifically, the first web page server sends detection information via the first route entry to the first load-balanced server; it is detected whether the first web page server receives feedback information via the first route entry within a preset time; if the feedback information via the first route entry is received, it is determined that communication over the first route entry is normal; if the feedback information is not received, it is determined that communication over the first route entry has failed.
Step S102: if it is detected that communication over the first route entry is normal, sending information to the first load-balanced server through the first route entry.
Each web page server is provided with a first port and a second port, wherein opening the first port indicates that the web page server is allowed to send information via the first route entry to the first load-balanced server, closing the first port indicates that it is not, opening the second port indicates that the web page server is allowed to send information via the second route entry to the second load-balanced server, and closing the second port indicates that it is not. After it is detected that communication over the first route entry is normal, and before the information is sent to the first load-balanced server through the first route entry, the first port is opened and the second port is closed at the same time.
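The mutually exclusive port states just described (the first port and the second port are always switched together, so exactly one is open) can be modelled by the following purely illustrative Python class; the class name is an assumption, and the port numbers 9001 and 9002 are taken from the example in this embodiment.

```python
class WebPageServer:
    """Tracks the two detection ports; the method requires that they are
    toggled at the same time, so exactly one is ever open."""

    def __init__(self):
        # Default state: first port open, second port closed.
        self.port_open = {9001: True, 9002: False}

    def on_first_route_normal(self):
        """Open the first port and close the second port simultaneously."""
        self.port_open[9001], self.port_open[9002] = True, False

    def on_first_route_failure(self):
        """Close the first port and open the second port simultaneously."""
        self.port_open[9001], self.port_open[9002] = False, True

server = WebPageServer()
server.on_first_route_failure()
print(server.port_open)  # {9001: False, 9002: True}
```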
For example, different default routes are configured on the web page server: route A, which by default sends all network data to lvs load-balanced server A, and route B, which by default sends all network data to lvs load-balanced server B. On the web page server, the Metric value of route A is configured as 1 and the Metric value of route B as 500, so that by default all packets are forwarded through route A.
It should be noted that lvs load-balanced server A corresponds to the above-mentioned first load-balanced server, and lvs load-balanced server B corresponds to the above-mentioned second load-balanced server.
The detection module of the http protocol is enabled: detection ports 9001 and 9002 are created on the web page server through the http protocol, and a static resource http://web page server 1/heartbeat/heartbert.gif is provided for detection. Lvs load-balanced server A detects port 9001, and lvs load-balanced server B detects port 9002. Specifically, taking the detection of port 9001 by lvs load-balanced server A as an example, an access path (or root path) is formed from the port created on the web page server through the http protocol and the corresponding IP address, and the lvs load-balanced server accesses the static resource http://web page server 1/heartbeat/heartbert.gif according to this access path. If accessing the static resource succeeds, it indicates that port 9001 is open; if accessing the static resource fails, it indicates that port 9001 is closed.
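The port detection just described, in which an lvs load-balanced server fetches a static resource over http and success means the detection port is open, can be sketched as follows. This is a loopback illustration only: the handler class, the minimal GIF payload and the ephemeral port are assumptions, not the actual deployment.

```python
import http.server
import threading
import urllib.error
import urllib.request

class HeartbeatHandler(http.server.BaseHTTPRequestHandler):
    """Serves the static probe resource; everything else is 404."""
    def do_GET(self):
        if self.path == "/heartbeat/heartbert.gif":
            self.send_response(200)
            self.send_header("Content-Type", "image/gif")
            self.end_headers()
            self.wfile.write(b"GIF89a")  # minimal payload for the probe
        else:
            self.send_error(404)
    def log_message(self, *args):  # keep the sketch quiet
        pass

def port_is_open(host, port):
    """The load-balanced server's view: the port counts as open exactly
    when the static resource can be fetched."""
    url = f"http://{host}:{port}/heartbeat/heartbert.gif"
    try:
        with urllib.request.urlopen(url, timeout=2) as r:
            return r.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    srv = http.server.HTTPServer(("127.0.0.1", 0), HeartbeatHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    print(port_is_open("127.0.0.1", srv.server_port))  # True: port is open
    srv.shutdown()
```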
Sending information to the first load-balanced server through the first route entry comprises: detecting whether the first port is open; and when it is detected that the first port is open, forwarding the information to the first load-balanced server through the first route entry.
Continuing the above example, route A is detected by means of traceroute. If the detection result is that route A can communicate normally, the current Metric value of route A is left unchanged, and when port 9001 is open, the web page server sends information via route A to lvs load-balanced server A.
Step S103: if a communication failure of the first route entry is detected, forwarding the information to the second load-balanced server through the second route entry.
Each web page server is provided with a first port and a second port, with the meanings described above under step S102. After a communication failure of the first route entry is detected, and before the information is forwarded to the second load-balanced server through the second route entry, the first port is closed and the second port is opened at the same time. The first route entry has a preset first-path first priority and a first-path second priority, where the first-path first priority represents the priority with which the web page server sends messages to the first load-balanced server via the first route entry. The second route entry has a second-path first priority, which represents the priority of sending messages via the second route entry to the second load-balanced server; the second-path first priority is lower than the first-path first priority. If a communication failure of the first route entry is detected, sending information to the second load-balanced server through the second route entry comprises: changing the first-path first priority to the first-path second priority, wherein the first-path second priority is lower than the second-path first priority; judging whether the second route entry is the route entry with the highest priority among the multiple route entries; and when it is judged that the second route entry is the route entry with the highest priority, receiving, by the second load-balanced server, the information forwarded via the second route entry.
For example, different default routes are configured on the web page server: route A, which by default sends all network data to lvs load-balanced server A, and route B, which by default sends all network data to lvs load-balanced server B. As noted above, lvs load-balanced server A corresponds to the first load-balanced server and lvs load-balanced server B to the second load-balanced server. The Metric value of route A is configured as 1 and the Metric value of route B as 500 on the web page server, so that by default all packets are forwarded through route A. It should be noted that the Metric value 1 of route A corresponds to the first-path first priority, and the Metric value 500 of route B corresponds to the second-path first priority.
Route A is detected by means of traceroute. If the detection result is that route A cannot communicate normally, the Metric value of route A is changed to 1000; this Metric value 1000 corresponds to the first-path second priority. Since the Metric value 1000 of route A is greater than the Metric value 500 of route B, the first-path second priority is lower than the second-path first priority, and all packet information is therefore forwarded by route B to lvs load-balanced server B.
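The Metric manipulation in this example amounts to a priority-based route selection, which the following Python sketch illustrates; the in-memory routing table is a stand-in assumption for the operating system's real routing table, and the gateway labels are illustrative.

```python
# Illustrative in-memory stand-in for the web page server's routing table.
routes = {
    "A": {"gateway": "lvs-A", "metric": 1},    # first-path first priority
    "B": {"gateway": "lvs-B", "metric": 500},  # second-path first priority
}

def active_route(table):
    """All packets are forwarded over the entry with the lowest Metric
    value, i.e. the highest-priority route entry."""
    return min(table, key=lambda name: table[name]["metric"])

def on_route_failure(table, failed="A", degraded_metric=1000):
    """On a detected communication failure, change the failed entry's
    priority to the first-path second priority (Metric 1000), so the
    second route entry becomes the highest-priority entry."""
    table[failed]["metric"] = degraded_metric
    return active_route(table)

print(active_route(routes))      # "A": route A while communication is normal
print(on_route_failure(routes))  # "B": route B after the failure is detected
```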
Fig. 2 is a schematic diagram of a client accessing a web page server. As shown in Fig. 2, when client computer 200 accesses domain name A, it needs to obtain the IP of the host server bound to domain name A. The procedure is as follows: first, client computer 200 sends a request instruction to recursion server 100 (that is, the server of the local bandwidth operator), and recursion server 100 forwards the request to the resolution server; then, the resolution server returns all the polled host server IPs configured for the domain name to recursion server 100, which in turn returns these IPs to client computer 200; finally, the browser of client computer 200 randomly accesses one of the IPs to access the web page server.
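The resolution flow above, in which the client receives all polled host server IPs and its browser accesses one of them at random, can be sketched as follows; the function names and IP addresses are hypothetical.

```python
import random

def resolve_domain(poll_ips):
    """Stand-in for the resolution server: it returns every host server
    IP configured for the domain name."""
    return list(poll_ips)

def pick_host(poll_ips):
    """Stand-in for the client's browser: it accesses one of the
    returned IPs at random."""
    return random.choice(resolve_domain(poll_ips))

configured = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical addresses
print(pick_host(configured) in configured)  # True: always a configured IP
```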
Under the lvs load balancing NAT mode, lvs load-balanced server 300 can only forward request instructions to web page servers 400 and 401, and lvs load-balanced server 301 can only forward request instructions to web page servers 402 and 403. That is, a web page server can only correspond to one lvs load-balanced server, while one lvs load-balanced server can correspond to multiple web page servers. In the present invention, the web page server detects whether communication over a route entry is normal; when communication is normal, the web page server sends information via this route entry to the load-balanced server corresponding to it, and when a communication failure of this route entry is detected, the web page server sends information via another route entry to the load-balanced server corresponding to that other route entry. In this way, when either of lvs load-balanced servers 300 and 301 goes wrong, web page servers 400, 401, 402 and 403 can still work normally.
The server load balancing method provided by the embodiment of the present invention detects whether communication over the first route entry is normal; if it is detected that communication over the first route entry is normal, information is sent to the first load-balanced server through the first route entry; and if a communication failure of the first route entry is detected, the information is forwarded to the second load-balanced server through the second route entry. The present invention thereby solves the problem that when one load-balanced server goes down, the web page server corresponding to it cannot work normally, and achieves the effect that when any load-balanced server goes wrong, the web page server corresponding to it can still work normally.
It should be noted that the steps shown in the flow chart of the accompanying drawing may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is shown in the flow chart, in some cases the steps may be performed in an order different from the one shown or described herein.
The embodiment of the present invention also provides a server load balancing device. It should be noted that the server load balancing device of the embodiment of the present invention may be used to perform the server load balancing method provided by the embodiment of the present invention. The server load balancing device provided by the embodiment of the present invention is introduced below.
Fig. 3 is a schematic diagram of the server load balancing device according to the present invention. As shown in Fig. 3, the device comprises: a detecting unit 10, a first transmitting unit 20 and a second transmitting unit 30.
The server load balancing device is used for the load balancing of multiple web page servers, wherein multiple route entries are provided between each web page server and the load-balanced servers, the load-balanced servers comprise a first load-balanced server and a second load-balanced server, the multiple route entries comprise a first route entry and a second route entry, the first route entry is the path for sending information between each web page server and the first load-balanced server, and the second route entry is the path for sending information between each web page server and the second load-balanced server.
The detecting unit 10 is configured to detect whether communication over the first route entry is normal.
Preferably, in the server load balancing device provided by the embodiment of the present invention, the detecting unit comprises: a second sending module for sending detection information; a first detection module for detecting whether feedback information via the first route entry is received within a preset time; a second detection module for determining that communication over the first route entry is normal when the feedback information via the first route entry is received; and a third detection module for determining that communication over the first route entry has failed when the feedback information via the first route entry is not received.
The first transmitting unit 20 is configured to send information to the first load-balanced server through the first route entry when it is detected that communication over the first route entry is normal.
Each web page server is provided with a first port and a second port, wherein opening the first port indicates that the web page server is allowed to send information via the first route entry to the first load-balanced server, closing the first port indicates that it is not, opening the second port indicates that the web page server is allowed to send information via the second route entry to the second load-balanced server, and closing the second port indicates that it is not. For operation after it is detected that communication over the first route entry is normal, and before the information is sent to the first load-balanced server through the first route entry, the device further comprises: a configuration module for opening the first port and closing the second port at the same time. The first transmitting unit comprises: a fourth detection module for detecting whether the first port is open; and a sending module for forwarding the information to the first load-balanced server through the first route entry when it is detected that the first port is open.
The second transmitting unit 30 is configured to forward the information to the second load-balanced server through the second route entry when a communication failure of the first route entry is detected.
In the server load balancing device provided by the embodiment of the present invention, the detecting unit 10 detects whether communication over the first route entry is normal; the first transmitting unit 20 sends information to the first load-balanced server through the first route entry when communication over the first route entry is normal; and the second transmitting unit 30 forwards the information to the second load-balanced server through the second route entry when a communication failure of the first route entry is detected. The present invention thereby solves the problem that when one load-balanced server goes down, the web page server corresponding to it cannot work normally, and achieves the effect that when any load-balanced server goes wrong, the web page server corresponding to it can still work normally.
Obviously, those skilled in the art should understand that each of the above modules or steps of the present invention may be implemented by a general-purpose computing device. They may be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device; alternatively, they may be made into individual integrated circuit modules, or multiple modules or steps among them may be made into a single integrated circuit module. Thus, the present invention is not restricted to any specific combination of hardware and software.
The above are only the preferred embodiments of the present invention and are not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.