CN104410677A - Server load balancing method and device

Info

Publication number
CN104410677A
CN104410677A (application CN201410659288.3A)
Authority
CN
China
Prior art keywords
route entry
load
server
web page
balanced server
Prior art date
Legal status
Granted
Application number
CN201410659288.3A
Other languages
Chinese (zh)
Other versions
CN104410677B (en)
Inventor
刘凯
Current Assignee
Beijing Gridsum Technology Co Ltd
Original Assignee
Beijing Gridsum Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Gridsum Technology Co Ltd filed Critical Beijing Gridsum Technology Co Ltd
Priority claimed from CN201410659288.3A
Publication of CN104410677A
Application granted
Publication of CN104410677B
Legal status: Active
Anticipated expiration


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1038 - Load balancing arrangements to avoid a single path through a load balancer

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a server load balancing method and device. The method comprises the following steps: detecting whether communication over a first route entry is normal; if communication over the first route entry is normal, sending information to a first load-balancing server via the first route entry; and if a communication failure is detected on the first route entry, forwarding information to a second load-balancing server via a second route entry. The method and device solve the problem that the web servers corresponding to a load-balancing server cannot work normally when that load-balancing server crashes.

Description

Server load balancing method and apparatus
Technical field
The present invention relates to the Internet field, and in particular to a server load balancing method and apparatus.
Background art
When a client accesses domain name A, it first needs to obtain the IP addresses of the host servers bound to domain name A. First, the client sends a request to a recursive server (that is, the local bandwidth operator's server), and the recursive server sends a request to a resolution server. The resolution server returns all the host server IPs configured for the domain name, in round-robin order, to the recursive server, and the recursive server returns these IPs to the client. The client's browser then accesses one of these IPs at random. In the prior art, under the network address translation (NAT) load balancing mode of the Linux Virtual Server (LVS), LVS load-balancing server A can forward the requests sent by clients only to web server 1 or 2, and LVS load-balancing server B can forward them only to web server 3 or 4. That is, a web server can correspond to only one LVS load-balancing server, while one LVS load-balancing server can correspond to multiple web servers. Consequently, when one load-balancing server crashes, only half of the back-end hosts keep working and the other half cannot work. For example, when LVS load-balancing server A fails, all client requests are forwarded by LVS load-balancing server B to web server 3 or 4, and web servers 1 and 2 cannot work.
For the problem in the prior art that the web servers corresponding to a load-balancing server cannot work normally when that server crashes, no effective solution has yet been proposed.
Summary of the invention
The main purpose of the present invention is to provide a server load balancing method and apparatus, so as to solve the problem that the web servers corresponding to a load-balancing server cannot work normally when that server crashes.
To achieve this goal, according to one aspect of the present invention, a server load balancing method is provided. The server load balancing method according to the present invention is used for load balancing among multiple web servers. Multiple route entries are provided between each web server and the load-balancing servers; the load-balancing servers comprise a first load-balancing server and a second load-balancing server, and the multiple route entries comprise a first route entry and a second route entry, where the first route entry is the path over which each web server sends information to the first load-balancing server, and the second route entry is the path over which each web server sends information to the second load-balancing server. The method comprises: detecting whether communication over the first route entry is normal; if communication over the first route entry is normal, sending information to the first load-balancing server via the first route entry; and if a communication failure is detected on the first route entry, forwarding information to the second load-balancing server via the second route entry.
Further, detecting whether communication over the first route entry is normal comprises: sending detection information; detecting whether information fed back via the first route entry is received within a preset time; if the fed-back information is received, determining that communication over the first route entry is normal; and if it is not received, determining that communication over the first route entry has failed.
Further, each web server is provided with a first port and a second port, where opening the first port indicates that the web server is allowed to send information to the first load-balancing server via the first route entry, and closing the first port indicates that it is not allowed to; opening the second port indicates that the web server is allowed to send information to the second load-balancing server via the second route entry, and closing the second port indicates that it is not allowed to. After communication over the first route entry is detected to be normal, and before information is sent to the first load-balancing server via the first route entry, the method further comprises: opening the first port and simultaneously closing the second port.
Further, sending information to the first load-balancing server via the first route entry comprises: detecting whether the first port is open; and, when the first port is detected to be open, forwarding information to the first load-balancing server via the first route entry.
Further, each web server is provided with a first port and a second port, where opening the first port allows the web server to send information to the first load-balancing server via the first route entry, opening the second port allows it to send information to the second load-balancing server via the second route entry, and closing either port forbids the corresponding sending. After a communication failure is detected on the first route entry, and before information is forwarded to the second load-balancing server via the second route entry, the method further comprises: closing the first port and simultaneously opening the second port.
Further, the first route entry has a preset first-path first priority and a first-path second priority, where the first-path first priority represents the priority with which the web server sends messages to the first load-balancing server via the first route entry. The second route entry has a second-path first priority, which represents the priority of sending messages to the second load-balancing server via the second route entry; the second-path first priority is lower than the first-path first priority. If a communication failure is detected on the first route entry, sending information to the second load-balancing server via the second route entry comprises: changing the first-path first priority to the first-path second priority, where the first-path second priority is lower than the second-path first priority; judging whether the second route entry is now the highest-priority route entry among the multiple route entries; and, when the second route entry is judged to be the highest-priority route entry, receiving, at the second load-balancing server, the information forwarded via the second route entry.
To achieve this goal, according to another aspect of the present invention, a server load balancing device is provided. The server load balancing device is used for load balancing among multiple web servers. Multiple route entries are provided between each web server and the load-balancing servers; the load-balancing servers comprise a first load-balancing server and a second load-balancing server, and the multiple route entries comprise a first route entry and a second route entry, where the first route entry is the path over which each web server sends information to the first load-balancing server, and the second route entry is the path over which each web server sends information to the second load-balancing server. The server load balancing device according to the present invention comprises: a detecting unit, configured to detect whether communication over the first route entry is normal; a first sending unit, configured to send information to the first load-balancing server via the first route entry when communication over the first route entry is detected to be normal; and a second sending unit, configured to forward information to the second load-balancing server via the second route entry when a communication failure is detected on the first route entry.
Further, the detecting unit comprises: a second sending module, configured to send detection information; a first detection module, configured to detect whether information fed back via the first route entry is received within a preset time; a second detection module, configured to determine that communication over the first route entry is normal when the fed-back information is received; and a third detection module, configured to determine that communication over the first route entry has failed when the fed-back information is not received.
Further, each web server is provided with a first port and a second port, where opening the first port allows the web server to send information to the first load-balancing server via the first route entry, opening the second port allows it to send information to the second load-balancing server via the second route entry, and closing either port forbids the corresponding sending. For the case where communication over the first route entry is detected to be normal and information is then to be sent to the first load-balancing server via the first route entry, the device further comprises: a configuration module, configured to open the first port and simultaneously close the second port.
Further, the first sending unit comprises: a fourth detection module, configured to detect whether the first port is open; and a sending module, configured to forward information to the first load-balancing server via the first route entry when the first port is detected to be open.
With the present invention, a method comprising the following steps is adopted: detecting whether communication over the first route entry is normal; if communication over the first route entry is normal, sending information to the first load-balancing server via the first route entry; and if a communication failure is detected on the first route entry, forwarding information to the second load-balancing server via the second route entry. By detecting whether communication over the first route entry is normal, the present invention determines a route entry over which communication is normal and sends the information on the web server to the corresponding load-balancing server via that route entry, thereby solving the problem that the web servers corresponding to a load-balancing server cannot work normally when that server crashes.
Brief description of the drawings
The accompanying drawings, which form a part of this application, are provided for a further understanding of the present invention. The schematic embodiments of the present invention and their description are used to explain the present invention and do not unduly limit it. In the drawings:
Fig. 1 is a flowchart of the server load balancing method according to the present invention;
Fig. 2 is a schematic diagram of a client accessing web servers; and
Fig. 3 is a schematic diagram of the server load balancing device according to the present invention.
Detailed description of the embodiments
It should be noted that, where no conflict arises, the embodiments in this application and the features in the embodiments may be combined with each other. The present invention is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
To enable those skilled in the art to better understand the solution of this application, the technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings in those embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in this application without creative effort shall fall within the scope of protection of this application.
It should be noted that the terms "first", "second" and the like in the specification, claims and accompanying drawings of this application are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments of this application described herein can be implemented in orders other than those illustrated. In addition, the terms "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device that comprises a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to such a process, method, product or device.
Fig. 1 is a flowchart of the server load balancing method according to the present invention. As shown in Fig. 1, the method comprises the following steps S101 to S103:
Step S101: detect whether communication over the first route entry is normal.
There are multiple web servers, and multiple route entries are provided between each web server and the load-balancing servers. The load-balancing servers comprise a first load-balancing server and a second load-balancing server, and the multiple route entries comprise a first route entry and a second route entry, where the first route entry is the path over which each web server sends information to the first load-balancing server, and the second route entry is the path over which each web server sends information to the second load-balancing server.
It is detected whether communication is normal over the first route entry, via which the first web server sends information to the first load-balancing server.
Specifically, the first web server sends detection information to the first load-balancing server via the first route entry, and it is detected whether the first web server receives information fed back via the first route entry within a preset time. If the fed-back information is received, communication over the first route entry is normal; if it is not received, communication over the first route entry has failed.
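The timeout check described above can be sketched as follows. This is a minimal illustration under assumed names: the `send_probe` callable and the two-second preset time are inventions of this example, not part of the patent.

```python
PRESET_TIME = 2.0  # the "preset time" within which feedback must arrive, in seconds

def detect_route(send_probe):
    """Send detection information and judge the route by the feedback delay.

    `send_probe` returns the feedback delay in seconds, or None when no
    feedback arrives at all.
    """
    delay = send_probe()
    if delay is not None and delay <= PRESET_TIME:
        return "normal"   # feedback received within the preset time
    return "failed"       # no timely feedback: communication failure

print(detect_route(lambda: 0.1))   # normal
print(detect_route(lambda: None))  # failed
```

A real implementation would of course perform the probe over the actual route entry and enforce the timeout with a timer rather than a reported delay.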
Step S102: if communication over the first route entry is detected to be normal, send information to the first load-balancing server via the first route entry.
Each web server is provided with a first port and a second port, where opening the first port indicates that the web server is allowed to send information to the first load-balancing server via the first route entry, closing the first port indicates that it is not allowed to, opening the second port indicates that the web server is allowed to send information to the second load-balancing server via the second route entry, and closing the second port indicates that it is not allowed to. After communication over the first route entry is detected to be normal, and before information is sent to the first load-balancing server via the first route entry, the first port is opened and the second port is simultaneously closed.
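The simultaneous port switch can be sketched as a single state transition. This is a hedged simplification; the function name and the string states are assumptions of the illustration.

```python
def set_ports(first_route_normal):
    """Return the (first_port, second_port) states for the current route health.

    Opening the first port permits sending via the first route entry;
    opening the second permits sending via the second.  The two ports are
    always switched together, so exactly one is open at any time.
    """
    if first_route_normal:
        return ("open", "closed")   # use the first route entry
    return ("closed", "open")       # fall back to the second route entry

print(set_ports(True))   # ('open', 'closed')
print(set_ports(False))  # ('closed', 'open')
```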
For example, different default routes are configured on the web server: route A, which by default sends all network data to LVS load-balancing server A; and route B, which by default sends all network data to LVS load-balancing server B. On the web server, the metric of route A is configured as 1 and the metric of route B as 500, so by default all packets are forwarded via route A.
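The configuration above amounts to ordinary lowest-metric route selection, which can be simulated as follows. The routing-table layout and the helper function are assumptions of this sketch; the metric values 1 and 500 come from the example.

```python
# Route A (metric 1) points at LVS load-balancing server A,
# route B (metric 500) at LVS load-balancing server B.
routes = {"A": {"metric": 1, "next_hop": "lvs-A"},
          "B": {"metric": 500, "next_hop": "lvs-B"}}

def pick_route(table):
    # All packets follow the route with the lowest metric.
    name = min(table, key=lambda r: table[r]["metric"])
    return name, table[name]["next_hop"]

print(pick_route(routes))  # ('A', 'lvs-A'): by default everything leaves via route A
```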
It should be noted that LVS load-balancing server A corresponds to the first load-balancing server above, and LVS load-balancing server B corresponds to the second load-balancing server above.
A health-check module over the HTTP protocol is enabled: check ports 9001 and 9002 are created on the web server via HTTP, and a static resource, http://web server 1/heartbeat/heartbert.gif, is provided to be probed. LVS load-balancing server A probes port 9001, and LVS load-balancing server B probes port 9002. Specifically, taking LVS load-balancing server A probing port 9001 as an example, the port created on the web server via HTTP and the corresponding IP address form an access path (also called a root path), and the LVS load-balancing server accesses the static resource via this path: if the access to the static resource succeeds, port 9001 is open; if it fails, port 9001 is closed.
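The probe reduces to one HTTP GET of the heartbeat resource. In the sketch below, the `fetch` callable stands in for the real request, and treating only status 200 as success is this example's choice, not something the patent specifies.

```python
def port_state(fetch):
    """Judge a check port (e.g. 9001) from one fetch of the heartbeat image.

    `fetch` performs an HTTP GET of the static heartbeat resource and
    returns the status code, or raises OSError when the connection fails.
    """
    try:
        status = fetch()
    except OSError:
        return "closed"    # connection refused: the port is closed
    return "open" if status == 200 else "closed"

def refused():
    raise OSError("connection refused")

print(port_state(lambda: 200))  # open
print(port_state(refused))      # closed
```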
Sending information to the first load-balancing server via the first route entry comprises: detecting whether the first port is open; and, when the first port is detected to be open, forwarding information to the first load-balancing server via the first route entry.
Following the example above, route A is probed by means of traceroute. If the probe shows that route A can communicate normally, the current metric of route A is left unchanged, and, when port 9001 is open, the web server sends information to LVS load-balancing server A via route A.
Step S103: if a communication failure is detected on the first route entry, forward information to the second load-balancing server via the second route entry.
Each web server is provided with a first port and a second port, where opening the first port allows the web server to send information to the first load-balancing server via the first route entry, opening the second port allows it to send information to the second load-balancing server via the second route entry, and closing either port forbids the corresponding sending. After a communication failure is detected on the first route entry, and before information is forwarded to the second load-balancing server via the second route entry, the first port is closed and the second port is simultaneously opened. The first route entry has a preset first-path first priority and a first-path second priority, where the first-path first priority represents the priority with which the web server sends messages to the first load-balancing server via the first route entry. The second route entry has a second-path first priority, which represents the priority of sending messages to the second load-balancing server via the second route entry and is lower than the first-path first priority. If a communication failure is detected on the first route entry, sending information to the second load-balancing server via the second route entry comprises: changing the first-path first priority to the first-path second priority, where the first-path second priority is lower than the second-path first priority; judging whether the second route entry is now the highest-priority route entry among the multiple route entries; and, when it is, receiving, at the second load-balancing server, the information forwarded via the second route entry.
For example, different default routes are configured on the web server: route A, which by default sends all network data to LVS load-balancing server A; and route B, which by default sends all network data to LVS load-balancing server B. (LVS load-balancing server A corresponds to the first load-balancing server above, and LVS load-balancing server B to the second.) On the web server, the metric of route A is configured as 1 and the metric of route B as 500, so by default all packets are forwarded via route A. The metric value 1 of route A corresponds to the first-path first priority, and the metric value 500 of route B corresponds to the second-path first priority.
Route A is probed by means of traceroute. If the probe shows that route A cannot communicate normally, the metric of route A is changed to 1000; this metric value 1000 corresponds to the first-path second priority. Since the metric 1000 of route A is now greater than the metric 500 of route B, the first-path second priority is lower than the second-path first priority, and all packets are forwarded by route B to LVS load-balancing server B.
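The demotion in this example, raising route A's metric above route B's so that route B takes over, can be replayed as follows. The routing-table layout and the helper are assumptions of the illustration; the metric values 1, 500 and 1000 come from the example.

```python
# Initial state: route A (metric 1) wins over route B (metric 500).
routes = {"A": {"metric": 1, "next_hop": "lvs-A"},
          "B": {"metric": 500, "next_hop": "lvs-B"}}

def best(table):
    # The lowest metric is the highest-priority route entry.
    return min(table, key=lambda r: table[r]["metric"])

print(best(routes))            # A

# traceroute reports route A unreachable: demote it to metric 1000.
routes["A"]["metric"] = 1000
print(best(routes))            # B: 1000 > 500, so route B forwards all packets
```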
Fig. 2 is a schematic diagram of a client accessing web servers. As shown in Fig. 2, when the client computer 200 accesses domain name A, it needs to obtain the IP addresses of the host servers bound to domain name A. The flow is as follows: first, the client computer 200 sends a request instruction to the recursive server 100 (that is, the local bandwidth operator's server), and the recursive server 100 forwards the request to the resolution server; then, the resolution server returns all the host server IPs configured for the domain name, in round-robin order, to the recursive server 100, which returns these IPs to the client computer 200; finally, the browser of the client computer 200 accesses a web server through one of these IPs chosen at random.
Under the LVS NAT load balancing mode, LVS load-balancing server 300 can forward request instructions only to web server 400 or 401, and LVS load-balancing server 301 can forward request instructions only to web server 402 or 403. That is, a web server can correspond to only one LVS load-balancing server, while one LVS load-balancing server can correspond to multiple web servers. In the present invention, the web server detects whether communication over a route entry is normal; when communication is normal, the web server sends information via that route entry to the load-balancing server corresponding to it, and when a communication failure is detected on that route entry, the web server sends information via another route entry to the load-balancing server corresponding to that other route entry. Thus, when either of the LVS load-balancing servers 300 and 301 goes wrong, web servers 400, 401, 402 and 403 can all still work normally.
The server load balancing method provided by the embodiment of the present invention detects whether communication over the first route entry is normal; if it is, information is sent to the first load-balancing server via the first route entry; and if a communication failure is detected on the first route entry, information is forwarded to the second load-balancing server via the second route entry. The present invention thus solves the problem that the web servers corresponding to a load-balancing server cannot work normally when that server crashes, and ensures that, when either load-balancing server goes wrong, the web servers corresponding to it can still work normally.
It should be noted that the steps shown in the flowchart of the accompanying drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps may be performed in an order different from the one shown or described herein.
The embodiment of the present invention also provides a server load balancing device. It should be noted that the server load balancing device of the embodiment of the present invention may be used to perform the server load balancing method provided by the embodiment of the present invention. The server load balancing device provided by the embodiment of the present invention is introduced below.
Fig. 3 is a schematic diagram of the server load balancing device according to the present invention. As shown in Fig. 3, the device comprises: a detecting unit 10, a first sending unit 20 and a second sending unit 30.
The server load balancing device is used for load balancing among multiple web servers. Multiple route entries are provided between each web server and the load-balancing servers; the load-balancing servers comprise a first load-balancing server and a second load-balancing server, and the multiple route entries comprise a first route entry and a second route entry, where the first route entry is the path over which each web server sends information to the first load-balancing server, and the second route entry is the path over which each web server sends information to the second load-balancing server.
The detecting unit 10 is configured to detect whether communication over the first route entry is normal.
Preferably, in the server load balancing device provided by the embodiment of the present invention, the detecting unit comprises: a second sending module, configured to send detection information; a first detection module, configured to detect whether information fed back via the first route entry is received within a preset time; a second detection module, configured to determine that communication over the first route entry is normal when the fed-back information is received; and a third detection module, configured to determine that communication over the first route entry has failed when the fed-back information is not received.
The first sending unit 20 is configured to send information to the first load-balancing server via the first route entry when communication over the first route entry is detected to be normal.
Each web server is provided with a first port and a second port, where opening the first port allows the web server to send information to the first load-balancing server via the first route entry, opening the second port allows it to send information to the second load-balancing server via the second route entry, and closing either port forbids the corresponding sending. For the case where communication over the first route entry is detected to be normal and information is then to be sent to the first load-balancing server via the first route entry, the device further comprises: a configuration module, configured to open the first port and simultaneously close the second port. The first sending unit comprises: a fourth detection module, configured to detect whether the first port is open; and a sending module, configured to forward information to the first load-balancing server via the first route entry when the first port is detected to be open.
The second sending unit 30 is configured to forward information to the second load-balancing server via the second route entry when a communication failure is detected on the first route entry.
In the server load balancing device provided by the embodiment of the present invention, the detecting unit 10 detects whether communication over the first route entry is normal; the first sending unit 20 sends information to the first load-balancing server via the first route entry when communication over the first route entry is normal; and the second sending unit 30 forwards information to the second load-balancing server via the second route entry when a communication failure is detected on the first route entry. The present invention thus solves the problem that the web servers corresponding to a load-balancing server cannot work normally when that server crashes, and ensures that, when either load-balancing server goes wrong, the web servers corresponding to it can still work normally.
Obviously, those skilled in the art should understand that each of the above modules or steps of the present invention can be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device; alternatively, they may each be made into an individual integrated-circuit module, or multiple of these modules or steps may be made into a single integrated-circuit module. The present invention is thus not limited to any particular combination of hardware and software.
The above are only the preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

1. A server load balancing method, wherein the server load balancing method is used for load balancing of multiple web page servers, multiple route entries are provided between each web page server and the load-balanced servers, the load-balanced servers comprise a first load-balanced server and a second load-balanced server, the multiple route entries comprise a first route entry and a second route entry, the first route entry is the path over which information is sent between each web page server and the first load-balanced server, and the second route entry is the path over which information is sent between each web page server and the second load-balanced server, characterized in that the method comprises:
detecting whether communication over the first route entry is normal;
if it is detected that communication over the first route entry is normal, sending information to the first load-balanced server through the first route entry; and
if a communication failure is detected on the first route entry, forwarding information to the second load-balanced server through the second route entry.
2. The method according to claim 1, characterized in that detecting whether communication over the first route entry is normal comprises:
sending detection information;
detecting whether information fed back via the first route entry is received within a preset time;
if the information fed back via the first route entry is received, determining that communication over the first route entry is normal; and
if the information fed back via the first route entry is not received, determining that communication over the first route entry has failed.
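Claim 2's timeout check can be sketched as follows, again purely as a reading aid. `PRESET_TIME`, `probe_route`, and `wait_for_feedback` are illustrative assumptions; the patent leaves the concrete preset time and probing mechanism open:

```python
# Illustrative sketch of the claim 2 check: send detection information,
# then treat the first route entry as failed if no feedback arrives
# within a preset time.
import time

PRESET_TIME = 0.05  # seconds; concrete value is not specified by the patent

def probe_route(wait_for_feedback) -> bool:
    """Return True if feedback arrives within PRESET_TIME, else False.
    `wait_for_feedback` stands in for sending detection information and
    polling for the reply; here it is any callable returning True on arrival."""
    deadline = time.monotonic() + PRESET_TIME
    while time.monotonic() < deadline:
        if wait_for_feedback():
            return True   # feedback received: communication is normal
        time.sleep(0.005)
    return False          # timed out: communication failure

assert probe_route(lambda: True) is True    # feedback arrives immediately
assert probe_route(lambda: False) is False  # no feedback within preset time
```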
3. The method according to claim 1, characterized in that the web page server is provided with a first port and a second port, wherein opening the first port indicates that the web page server is allowed to send information to the first load-balanced server via the first route entry, closing the first port indicates that the web page server is not allowed to send information to the first load-balanced server via the first route entry, opening the second port indicates that the web page server is allowed to send information to the second load-balanced server via the second route entry, and closing the second port indicates that the web page server is not allowed to send information to the second load-balanced server via the second route entry; and after it is detected that communication over the first route entry is normal, and before information is sent to the first load-balanced server through the first route entry, the method further comprises:
opening the first port and simultaneously closing the second port.
4. The method according to claim 3, characterized in that sending information to the first load-balanced server through the first route entry comprises:
detecting whether the first port is open; and
when the first port is detected to be open, forwarding information to the first load-balanced server through the first route entry.
5. The method according to claim 1, characterized in that the web page server is provided with a first port and a second port, wherein opening the first port indicates that the web page server is allowed to send information to the first load-balanced server via the first route entry, closing the first port indicates that the web page server is not allowed to send information to the first load-balanced server via the first route entry, opening the second port indicates that the web page server is allowed to send information to the second load-balanced server via the second route entry, and closing the second port indicates that the web page server is not allowed to send information to the second load-balanced server via the second route entry; and after a communication failure is detected on the first route entry, and before information is forwarded to the second load-balanced server through the second route entry, the method further comprises:
closing the first port and simultaneously opening the second port.
6. The method according to claim 5, characterized in that the first route entry has a preset first-path first priority and a first-path second priority, the first-path first priority represents the priority of the web page server sending messages to the first load-balanced server via the first route entry, the second route entry has a second-path first priority, the second-path first priority represents the priority of sending messages to the second load-balanced server via the second route entry, and the second-path first priority is lower than the first-path first priority; and if a communication failure is detected on the first route entry, sending information to the second load-balanced server through the second route entry comprises:
changing the first-path first priority to the first-path second priority, wherein the first-path second priority is lower than the second-path first priority;
judging whether the second route entry is the route entry with the highest priority among the multiple route entries; and
when it is judged that the second route entry is the route entry with the highest priority among the multiple route entries, receiving, by the second load-balanced server, the information forwarded via the second route entry.
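The priority mechanics of claim 6 can be sketched numerically, as a reading aid only. The dictionary keys, the numeric values, and `demote_first` are illustrative assumptions:

```python
# Illustrative sketch of claim 6: each route entry carries a numeric
# priority; on failure of the first route entry, its priority is demoted
# below the second entry's, so the highest-priority entry becomes the second.
routes = {"first": 100, "second": 90}   # higher number = higher priority

def demote_first(routes):
    # change the first-path first priority to the lower first-path second
    # priority, which must sit below the second-path first priority
    routes["first"] = routes["second"] - 10
    # select the route entry with the highest priority among all entries
    return max(routes, key=routes.get)

assert max(routes, key=routes.get) == "first"   # before the failure
assert demote_first(routes) == "second"         # after the demotion
```

In practice this corresponds to lowering a route metric so that the operating system's ordinary highest-priority route selection carries traffic to the second load-balanced server without per-packet special casing.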
7. A server load balancing device, wherein the server load balancing device is used for load balancing of multiple web page servers, multiple route entries are provided between each web page server and the load-balanced servers, the load-balanced servers comprise a first load-balanced server and a second load-balanced server, the multiple route entries comprise a first route entry and a second route entry, the first route entry is the path over which information is sent between each web page server and the first load-balanced server, and the second route entry is the path over which information is sent between each web page server and the second load-balanced server, characterized in that the device comprises:
a detecting unit, configured to detect whether communication over the first route entry is normal;
a first transmitting unit, configured to send information to the first load-balanced server through the first route entry when communication over the first route entry is detected to be normal; and
a second transmitting unit, configured to forward information to the second load-balanced server through the second route entry when a communication failure is detected on the first route entry.
8. The device according to claim 7, characterized in that the detecting unit comprises:
a second sending module, configured to send detection information;
a first detection module, configured to detect whether information fed back via the first route entry is received within a preset time;
a second detection module, configured to determine, when the information fed back via the first route entry is received, that communication over the first route entry is normal; and
a third detection module, configured to determine, when the information fed back via the first route entry is not received, that communication over the first route entry has failed.
9. The device according to claim 7, characterized in that the web page server is provided with a first port and a second port, wherein opening the first port indicates that the web page server is allowed to send information to the first load-balanced server via the first route entry, closing the first port indicates that the web page server is not allowed to send information to the first load-balanced server via the first route entry, opening the second port indicates that the web page server is allowed to send information to the second load-balanced server via the second route entry, and closing the second port indicates that the web page server is not allowed to send information to the second load-balanced server via the second route entry; and after it is detected that communication over the first route entry is normal, and before information is sent to the first load-balanced server through the first route entry, the device further comprises:
a configuration module, configured to open the first port and simultaneously close the second port.
10. The device according to claim 9, characterized in that the first transmitting unit comprises:
a fourth detection module, configured to detect whether the first port is open; and
a sending module, configured to forward information to the first load-balanced server through the first route entry when the first port is detected to be open.
CN201410659288.3A 2014-11-18 2014-11-18 Server load balancing method and apparatus Active CN104410677B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410659288.3A CN104410677B (en) 2014-11-18 2014-11-18 Server load balancing method and apparatus


Publications (2)

Publication Number Publication Date
CN104410677A true CN104410677A (en) 2015-03-11
CN104410677B CN104410677B (en) 2017-12-19

Family

ID=52648275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410659288.3A Active CN104410677B (en) 2014-11-18 2014-11-18 Server load balancing method and apparatus

Country Status (1)

Country Link
CN (1) CN104410677B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107547394A (en) * 2017-08-14 2018-01-05 新华三信息安全技术有限公司 A kind of load-balancing device dispositions method more living and device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101420381A (en) * 2008-12-08 2009-04-29 杭州华三通信技术有限公司 Method and apparatus for enhancing forwarding reliability in VRRP load balance
CN101741850A (en) * 2009-12-25 2010-06-16 北京邮电大学 Multitask concurrent executive system and method for hybrid network service
CN102143046A (en) * 2010-08-25 2011-08-03 华为技术有限公司 Load balancing method, equipment and system
CN102387218A (en) * 2011-11-24 2012-03-21 浪潮电子信息产业股份有限公司 Multimachine hot standby load balance system for computer
CN102510407A (en) * 2011-11-22 2012-06-20 沈文策 Method and system for reading and writing microblog
CN102968310A (en) * 2012-12-05 2013-03-13 武汉烽火普天信息技术有限公司 Integrated high-performance application software architecture and construction method thereof
CN103346923A (en) * 2013-07-30 2013-10-09 曙光信息产业(北京)有限公司 Management method and management device for double-unit load balancing equipment
CN103795805A (en) * 2014-02-27 2014-05-14 中国科学技术大学苏州研究院 Distributed server load balancing method based on SDN


Also Published As

Publication number Publication date
CN104410677B (en) 2017-12-19

Similar Documents

Publication Publication Date Title
CN112470436B (en) Systems, methods, and computer-readable media for providing multi-cloud connectivity
US10237230B2 (en) Method and system for inspecting network traffic between end points of a zone
US10187263B2 (en) Integrating physical and virtual network functions in a service-chained network environment
US20180324245A1 (en) Load Balancing Method, Apparatus, and System
US9729441B2 (en) Service function bundling for service function chains
He et al. Next stop, the cloud: Understanding modern web service deployment in ec2 and azure
KR100900491B1 (en) Method and apparatus for blocking distributed denial of service
JP7058270B2 (en) Routing within a hybrid network
CN107483574B (en) Data interaction system, method and device under load balance
US20190215308A1 (en) Selectively securing a premises network
EP3021534A1 (en) A network controller and a computer implemented method for automatically define forwarding rules to configure a computer networking device
US20150058983A1 (en) Revival and redirection of blocked connections for intention inspection in computer networks
US20170034174A1 (en) Method for providing access to a web server
EP3720075B1 (en) Data transmission method and virtual switch
CN105939239B (en) Data transmission method and device of virtual network card
CN108259425A (en) The determining method, apparatus and server of query-attack
US11171809B2 (en) Identity-based virtual private network tunneling
CN104270291A (en) Content delivery network (CDN) quality monitoring method
KR20170005129A (en) Network packet encapsulation and routing
US10587521B2 (en) Hierarchical orchestration of a computer network
WO2016108140A1 (en) Ccn fragmentation gateway
US20180034768A1 (en) Translating Network Attributes of Packets in a Multi-Tenant Environment
CN109347670A (en) Route tracing method and device, electronic equipment, storage medium
EP2775676A1 (en) Policy based routing method and device
Le et al. Experiences deploying a transparent split tcp middlebox and the implications for nfv

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Load balancing method and apparatus for edge EPG server, implementing method for user login

Effective date of registration: 20190531

Granted publication date: 20171219

Pledgee: Shenzhen Black Horse World Investment Consulting Co.,Ltd.

Pledgor: BEIJING GRIDSUM TECHNOLOGY Co.,Ltd.

Registration number: 2019990000503

PE01 Entry into force of the registration of the contract for pledge of patent right
CP02 Change in the address of a patent holder

Address after: 100083 No. 401, 4th Floor, Haitai Building, 229 North Fourth Ring Road, Haidian District, Beijing

Patentee after: BEIJING GRIDSUM TECHNOLOGY Co.,Ltd.

Address before: 100086 Beijing city Haidian District Shuangyushu Area No. 76 Zhichun Road cuigongfandian 8 layer A

Patentee before: BEIJING GRIDSUM TECHNOLOGY Co.,Ltd.

CP02 Change in the address of a patent holder
PP01 Preservation of patent right

Effective date of registration: 20240604

Granted publication date: 20171219