Summary of the Invention
It is a primary object of the present invention to provide a server load balancing method and apparatus, so as to solve the problem that, when one load-balanced server goes down, the corresponding web page server cannot work normally.
To achieve this object, according to one aspect of the present invention, a server load balancing method is provided. The server load balancing method according to the present invention is used for load balancing of multiple web page servers. Multiple route entries are provided between each web page server and the load-balanced servers; the load-balanced servers include a first load-balanced server and a second load-balanced server; the multiple route entries include a first route entry and a second route entry, where the first route entry is the path over which each web page server sends information to the first load-balanced server, and the second route entry is the path over which each web page server sends information to the second load-balanced server. The method includes: detecting whether communication over the first route entry is normal; if communication over the first route entry is detected to be normal, sending information to the first load-balanced server through the first route entry; and if a communication failure of the first route entry is detected, forwarding information to the second load-balanced server through the second route entry.
Further, detecting whether communication over the first route entry is normal includes: sending detection information; detecting whether information fed back via the first route entry is received within a preset time; if information fed back via the first route entry is received, determining that communication over the first route entry is normal; and if no information fed back via the first route entry is received, determining that communication over the first route entry has failed.
Further, a first port and a second port are provided on the web page server, where opening the first port indicates that the web page server is allowed to send information to the first load-balanced server via the first route entry, closing the first port indicates that the web page server is not allowed to send information to the first load-balanced server via the first route entry, opening the second port indicates that the web page server is allowed to send information to the second load-balanced server via the second route entry, and closing the second port indicates that the web page server is not allowed to send information to the second load-balanced server via the second route entry. After it is detected that communication over the first route entry is normal, and before information is sent to the first load-balanced server through the first route entry, the method further includes: opening the first port and closing the second port.
Further, sending information to the first load-balanced server through the first route entry includes: detecting whether the first port is open; and, if the first port is detected to be open, forwarding information to the first load-balanced server through the first route entry.
Further, a first port and a second port are provided on the web page server, where opening the first port indicates that the web page server is allowed to send information to the first load-balanced server via the first route entry, closing the first port indicates that the web page server is not allowed to send information to the first load-balanced server via the first route entry, opening the second port indicates that the web page server is allowed to send information to the second load-balanced server via the second route entry, and closing the second port indicates that the web page server is not allowed to send information to the second load-balanced server via the second route entry. After a communication failure of the first route entry is detected, and before information is forwarded to the second load-balanced server through the second route entry, the method further includes: closing the first port and opening the second port.
Further, the first route entry has a default first-path first priority and a first-path second priority, where the first-path first priority represents the priority with which the web page server sends messages to the first load-balanced server via the first route entry; the second route entry has a second-path first priority, which represents the priority with which messages are sent to the second load-balanced server via the second route entry; and the second-path first priority is lower than the first-path first priority. If a communication failure of the first route entry is detected, sending information to the second load-balanced server through the second route entry includes: changing the first-path first priority to the first-path second priority, where the first-path second priority is lower than the second-path first priority; judging whether the second route entry is the route entry with the highest priority among the multiple route entries; and, if the second route entry is judged to be the route entry with the highest priority among the multiple route entries, receiving, by the second load-balanced server, the information forwarded via the second route entry.
To achieve the above object, according to another aspect of the present invention, a server load balancing apparatus is provided. The server load balancing apparatus is used for load balancing of multiple web page servers. Multiple route entries are provided between each web page server and the load-balanced servers; the load-balanced servers include a first load-balanced server and a second load-balanced server; the multiple route entries include a first route entry and a second route entry, where the first route entry is the path over which each web page server sends information to the first load-balanced server, and the second route entry is the path over which each web page server sends information to the second load-balanced server. The server load balancing apparatus according to the present invention includes: a detection unit, configured to detect whether communication over the first route entry is normal; a first transmitting element, configured to send information to the first load-balanced server through the first route entry when communication over the first route entry is detected to be normal; and a second transmitting element, configured to forward information to the second load-balanced server through the second route entry when a communication failure of the first route entry is detected.
Further, the detection unit includes: a second sending module, configured to send detection information; a first detection module, configured to detect whether information fed back via the first route entry is received within a preset time; a second detection module, configured to determine that communication over the first route entry is normal when information fed back via the first route entry is received; and a third detection module, configured to determine that communication over the first route entry has failed when no information fed back via the first route entry is received.
Further, a first port and a second port are provided on the web page server, where opening the first port indicates that the web page server is allowed to send information to the first load-balanced server via the first route entry, closing the first port indicates that the web page server is not allowed to send information to the first load-balanced server via the first route entry, opening the second port indicates that the web page server is allowed to send information to the second load-balanced server via the second route entry, and closing the second port indicates that the web page server is not allowed to send information to the second load-balanced server via the second route entry. After it is detected that communication over the first route entry is normal, and before information is sent to the first load-balanced server through the first route entry, the apparatus further includes: a configuration module, configured to open the first port and close the second port.
Further, the first transmitting element includes: a fourth detection module, configured to detect whether the first port is open; and a sending module, configured to forward information to the first load-balanced server through the first route entry when the first port is detected to be open.
With the present invention, a method including the following steps is adopted: detecting whether communication over the first route entry is normal; if communication over the first route entry is detected to be normal, sending information to the first load-balanced server through the first route entry; and if a communication failure of the first route entry is detected, forwarding information to the second load-balanced server through the second route entry. By detecting whether communication over the first route entry is normal, and sending the information from the web page server to the corresponding load-balanced server via a route entry whose communication is normal, the present invention solves the problem that, when one load-balanced server goes down, the corresponding web page server cannot work normally.
Embodiments
It should be noted that, provided there is no conflict, the embodiments in this application and the features in the embodiments may be combined with each other. The present invention is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
To enable those skilled in the art to better understand the solution of this application, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative work shall fall within the scope of protection of this application.
It should be noted that the terms "first", "second", and the like in the specification, claims, and accompanying drawings of this application are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments of this application described herein can be implemented. In addition, the terms "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that contains a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such a process, method, product, or device.
Fig. 1 is a flow chart of the server load balancing method according to the present invention. As shown in Fig. 1, the method includes the following steps S101 to S103:
Step S101: detecting whether communication over the first route entry is normal.
There are multiple web page servers, and multiple route entries are provided between each web page server and the load-balanced servers. The load-balanced servers include a first load-balanced server and a second load-balanced server; the multiple route entries include a first route entry and a second route entry, where the first route entry is the path over which each web page server sends information to the first load-balanced server, and the second route entry is the path over which each web page server sends information to the second load-balanced server.
It is detected whether communication over the first route entry, through which the first web page server sends information to the first load-balanced server, is normal.
Specifically, the first web page server sends detection information to the first load-balanced server via the first route entry; it is then detected whether the first web page server receives information fed back via the first route entry within a preset time. If information fed back via the first route entry is received, it is determined that communication over the first route entry is normal; if no information fed back via the first route entry is received, it is determined that communication over the first route entry has failed.
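The probe-and-wait logic above can be sketched as follows. This is a minimal illustration, not the claimed implementation: the `probe` callable stands in for sending detection information and checking for feedback over a route entry, and the timeout values are assumptions.

```python
import time

def route_is_normal(probe, preset_time=3.0, poll_interval=0.1):
    """Return True if `probe()` reports feedback within `preset_time`
    seconds, False otherwise.  `probe` is any callable that returns
    True once information fed back via the route entry has arrived."""
    deadline = time.monotonic() + preset_time
    while time.monotonic() < deadline:
        if probe():
            return True          # feedback received: communication normal
        time.sleep(poll_interval)
    return False                 # preset time elapsed: communication failed
```

A caller would pass a probe bound to a concrete route entry, for example one that pings the first load-balanced server.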
Step S102: if communication over the first route entry is detected to be normal, sending information to the first load-balanced server through the first route entry.
A first port and a second port are provided on the web page server, where opening the first port indicates that the web page server is allowed to send information to the first load-balanced server via the first route entry, closing the first port indicates that the web page server is not allowed to send information to the first load-balanced server via the first route entry, opening the second port indicates that the web page server is allowed to send information to the second load-balanced server via the second route entry, and closing the second port indicates that the web page server is not allowed to send information to the second load-balanced server via the second route entry. After it is detected that communication over the first route entry is normal, and before information is sent to the first load-balanced server through the first route entry, the first port is opened and the second port is closed.
For example, different default routes are configured on the web page server: route A, which by default sends all network data to LVS load-balanced server A, and route B, which by default sends all network data to LVS load-balanced server B. On the web page server, the Metric value of route A is configured as 1 and the Metric value of route B as 500, so that by default all data packets are forwarded via route A.
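This metric-based selection can be sketched as follows, assuming the usual operating-system rule that the default route with the lowest metric carries the traffic; the route names and metric values are taken from the example above.

```python
def select_route(routes):
    """Given a mapping of route name -> metric value, return the name
    of the default route that will carry traffic: the entry with the
    lowest metric (lower metric means higher priority)."""
    return min(routes, key=routes.get)

# Metric values from the example: route A (metric 1) wins over B (500).
default_routes = {"A": 1, "B": 500}
```

With the example values, `select_route(default_routes)` yields `"A"`, matching the statement that all packets are forwarded via route A by default.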
It should be noted that LVS load-balanced server A is the above-mentioned first load-balanced server, and LVS load-balanced server B is the above-mentioned second load-balanced server.
A detection module using the HTTP protocol is opened: detection ports 9001 and 9002 are created on the web page server using the HTTP protocol, and a static resource http://<web page server 1>/heartbeat/heartbert.gif is provided to be detected. LVS load-balanced server A detects port 9001, and LVS load-balanced server B detects port 9002. Specifically, taking LVS load-balanced server A detecting port 9001 as an example, an access path (also called a root) is composed of the port created on the web page server using the HTTP protocol and the corresponding IP address, and the LVS load-balanced server accesses the static resource http://<web page server 1>/heartbeat/heartbert.gif according to this access path. If accessing the static resource succeeds, port 9001 is open; if accessing it fails, port 9001 is closed.
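A sketch of this heartbeat check is given below. The fetch is abstracted behind a `fetch` callable (defaulting to `urllib.request.urlopen`) so it can be exercised without a live server; the host, port numbers, and resource path are the illustrative values from the example, not a prescribed API.

```python
import urllib.request
import urllib.error

def heartbeat_port_open(host, port, fetch=urllib.request.urlopen):
    """Return True if the heartbeat resource on host:port can be
    fetched with HTTP status 200 (the detection port is open);
    return False on any connection or HTTP failure (port closed)."""
    url = "http://%s:%d/heartbeat/heartbert.gif" % (host, port)
    try:
        with fetch(url, timeout=2.0) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

An LVS load-balanced server would call this twice, once per detection port (9001 or 9002), to decide whether the corresponding route may be used.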
Sending information to the first load-balanced server through the first route entry includes: detecting whether the first port is open; and, if the first port is detected to be open, forwarding information to the first load-balanced server through the first route entry.
Following the above example, route A is detected by means of traceroute. If the detection result for route A is that it can communicate normally, the current Metric value of route A is left unchanged, and, provided port 9001 is open, the web page server sends information to LVS load-balanced server A via route A.
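The two conditions in the example (route A communicates normally, and detection port 9001 is open) can be combined in a small decision sketch. The names follow the example; the function is an illustration of the gating logic, not the patented implementation.

```python
def forwarding_target(route_a_ok, port_9001_open, port_9002_open):
    """Decide which LVS load-balanced server should receive traffic.

    Route A is used only when it communicates normally AND detection
    port 9001 is open; otherwise traffic falls back to route B,
    provided detection port 9002 is open."""
    if route_a_ok and port_9001_open:
        return "lvs-A"
    if port_9002_open:
        return "lvs-B"
    return None  # no healthy path is available
```

In the normal case of the example, both conditions hold and traffic goes to LVS load-balanced server A.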
Step S103: if a communication failure of the first route entry is detected, forwarding information to the second load-balanced server through the second route entry.
A first port and a second port are provided on the web page server, where opening the first port indicates that the web page server is allowed to send information to the first load-balanced server via the first route entry, closing the first port indicates that the web page server is not allowed to send information to the first load-balanced server via the first route entry, opening the second port indicates that the web page server is allowed to send information to the second load-balanced server via the second route entry, and closing the second port indicates that the web page server is not allowed to send information to the second load-balanced server via the second route entry. After a communication failure of the first route entry is detected, and before information is forwarded to the second load-balanced server through the second route entry, the first port is closed and the second port is opened. The first route entry has a default first-path first priority and a first-path second priority, where the first-path first priority represents the priority with which the web page server sends messages to the first load-balanced server via the first route entry; the second route entry has a second-path first priority, which represents the priority with which messages are sent to the second load-balanced server via the second route entry; and the second-path first priority is lower than the first-path first priority. If a communication failure of the first route entry is detected, sending information to the second load-balanced server through the second route entry includes: changing the first-path first priority to the first-path second priority, where the first-path second priority is lower than the second-path first priority; judging whether the second route entry is the route entry with the highest priority among the multiple route entries; and, if the second route entry is judged to be the route entry with the highest priority among the multiple route entries, receiving, by the second load-balanced server, the information forwarded via the second route entry.
For example, different default routes are configured on the web page server: route A, which by default sends all network data to LVS load-balanced server A, and route B, which by default sends all network data to LVS load-balanced server B. It should be noted that LVS load-balanced server A is the above-mentioned first load-balanced server, and LVS load-balanced server B is the above-mentioned second load-balanced server. On the web page server, the Metric value of route A is configured as 1 and the Metric value of route B as 500, so that by default all data packets are forwarded via route A. It should be noted that the Metric value 1 of route A corresponds to the first-path first priority, and the Metric value 500 of route B corresponds to the second-path first priority.
Route A is detected by means of traceroute. If the detection result for route A is that it cannot communicate normally, the Metric value of route A is changed to 1000; that is, the Metric value 1000 of route A corresponds to the first-path second priority. Since the Metric value 1000 of route A is greater than the Metric value 500 of route B, i.e. the first-path second priority is lower than the second-path first priority, all data packets are forwarded by route B to LVS load-balanced server B.
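The failover step can be sketched end to end as follows. The metric values (1, 500, 1000) are taken from the example; the probe callable and the route-table layout are illustrative assumptions, with lower metric meaning higher priority.

```python
FAILED_METRIC = 1000  # first-path second priority in the example

def failover(routes, probe_route_a):
    """Demote route A when its probe reports a failure, then return
    the route that now carries traffic (lowest metric wins)."""
    if not probe_route_a():
        # Route A cannot communicate normally: raise its metric above
        # route B's 500 so route B becomes the highest-priority entry.
        routes["A"] = FAILED_METRIC
    return min(routes, key=routes.get)
```

With a healthy probe the metrics stay at {A: 1, B: 500} and route A is kept; with a failing probe route A's metric becomes 1000 and all packets flow via route B.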
Fig. 2 is a schematic diagram of a client accessing a web page server. As shown in Fig. 2, when the client computer 200 accesses domain name A, it needs to obtain the IP of the host server bound to domain name A. The flow is as follows: first, the client computer 200 sends a request instruction to the recursion server 100 (i.e., the local bandwidth operator's server), and the recursion server 100 sends the request to the resolution server; then, the resolution server returns all polled host server IPs configured for the domain name to the recursion server 100, and the recursion server 100 returns these IPs to the client computer 200; finally, the browser of the client computer 200 accesses the web page server through one of the IPs chosen at random.
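The resolution flow above amounts to round-robin DNS followed by a random pick on the client side. A minimal sketch of that last client-side step follows; the IP addresses are purely illustrative.

```python
import random

def pick_host(resolved_ips, rng=random):
    """Client-side step of the flow above: the resolution server has
    returned every polled host server IP for the domain, and the
    browser accesses one of them at random."""
    return rng.choice(resolved_ips)

# Illustrative addresses standing in for the polled host server IPs.
polled_ips = ["203.0.113.10", "203.0.113.11"]
```

Each fresh page load may therefore land on a different web page server, spreading clients across the LVS load-balanced servers.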
Under the LVS load balancing NAT mode, LVS load-balanced server 300 can only forward request instructions to web page server 400 or 401, and LVS load-balanced server 301 can only forward request instructions to web page server 402 or 403. That is, one web page server can only correspond to one LVS load-balanced server, while one LVS load-balanced server can correspond to multiple web page servers. In the present invention, by detecting whether communication over a route entry on the web page server is normal, the web page server sends information to the load-balanced server corresponding to that route entry when its communication is normal, and sends information to the load-balanced server corresponding to another route entry when a communication failure of the route entry is detected. In this way, when either of LVS load-balanced servers 300 and 301 goes wrong, web page servers 400, 401, 402, and 403 can still work normally.
The server load balancing method provided in the embodiment of the present invention detects whether communication over the first route entry is normal; if communication over the first route entry is detected to be normal, sends information to the first load-balanced server through the first route entry; and if a communication failure of the first route entry is detected, forwards information to the second load-balanced server through the second route entry. The present invention thereby solves the problem that, when one load-balanced server goes down, the corresponding web page server cannot work normally, and achieves the effect that, when any load-balanced server goes wrong, the corresponding web page server can still work normally.
It should be noted that the steps illustrated in the flow chart of the accompanying drawing may be performed in a computer system such as a set of computer-executable instructions, and, although a logical order is shown in the flow chart, in some cases the steps shown or described may be performed in an order different from that described herein.
An embodiment of the present invention further provides a server load balancing apparatus. It should be noted that the server load balancing apparatus of the embodiment of the present invention may be used to perform the server load balancing method provided by the embodiment of the present invention. The server load balancing apparatus provided in the embodiment of the present invention is introduced below.
Fig. 3 is a schematic diagram of the server load balancing apparatus according to the present invention. As shown in Fig. 3, the apparatus includes: a detection unit 10, a first transmitting element 20, and a second transmitting element 30.
The server load balancing apparatus is used for load balancing of multiple web page servers. Multiple route entries are provided between each web page server and the load-balanced servers; the load-balanced servers include a first load-balanced server and a second load-balanced server; the multiple route entries include a first route entry and a second route entry, where the first route entry is the path over which each web page server sends information to the first load-balanced server, and the second route entry is the path over which each web page server sends information to the second load-balanced server.
The detection unit 10 is configured to detect whether communication over the first route entry is normal.
Preferably, in the server load balancing apparatus provided in the embodiment of the present invention, the detection unit includes: a second sending module, configured to send detection information; a first detection module, configured to detect whether information fed back via the first route entry is received within a preset time; a second detection module, configured to determine that communication over the first route entry is normal when information fed back via the first route entry is received; and a third detection module, configured to determine that communication over the first route entry has failed when no information fed back via the first route entry is received.
The first transmitting element 20 is configured to send information to the first load-balanced server through the first route entry when communication over the first route entry is detected to be normal.
A first port and a second port are provided on the web page server, where opening the first port indicates that the web page server is allowed to send information to the first load-balanced server via the first route entry, closing the first port indicates that the web page server is not allowed to send information to the first load-balanced server via the first route entry, opening the second port indicates that the web page server is allowed to send information to the second load-balanced server via the second route entry, and closing the second port indicates that the web page server is not allowed to send information to the second load-balanced server via the second route entry. After it is detected that communication over the first route entry is normal, and before information is sent to the first load-balanced server through the first route entry, the apparatus further includes: a configuration module, configured to open the first port and close the second port. The first transmitting element includes: a fourth detection module, configured to detect whether the first port is open; and a sending module, configured to forward information to the first load-balanced server through the first route entry when the first port is detected to be open.
The second transmitting element 30 is configured to forward information to the second load-balanced server through the second route entry when a communication failure of the first route entry is detected.
In the server load balancing apparatus provided in the embodiment of the present invention, the detection unit 10 detects whether communication over the first route entry is normal; the first transmitting element 20 sends information to the first load-balanced server through the first route entry when communication over the first route entry is detected to be normal; and the second transmitting element 30 forwards information to the second load-balanced server through the second route entry when a communication failure of the first route entry is detected. The present invention thereby solves the problem that, when one load-balanced server goes down, the corresponding web page server cannot work normally, and achieves the effect that, when any load-balanced server goes wrong, the corresponding web page server can still work normally.
Obviously, those skilled in the art should understand that the above-mentioned modules or steps of the present invention may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network formed by multiple computing devices; optionally, they may be implemented with program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device, or they may be fabricated separately into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The foregoing is only the preferred embodiments of the present invention and is not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent substitution, improvement, and the like made within the spirit and principles of the present invention shall be included in the scope of protection of the present invention.