CN104301439A - Load balancing method, device and system - Google Patents

Load balancing method, device and system

Info

Publication number
CN104301439A
CN104301439A (application CN201410642193.0A)
Authority
CN
China
Prior art keywords
server
data list
client
distribution server
front-end server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410642193.0A
Other languages
Chinese (zh)
Other versions
CN104301439B (en)
Inventor
赵铁雄 (Zhao Tiexiong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Gridsum Technology Co Ltd
Original Assignee
Beijing Gridsum Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Gridsum Technology Co Ltd filed Critical Beijing Gridsum Technology Co Ltd
Priority to CN201410642193.0A priority Critical patent/CN104301439B/en
Publication of CN104301439A publication Critical patent/CN104301439A/en
Application granted granted Critical
Publication of CN104301439B publication Critical patent/CN104301439B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004: Server selection for load balancing
    • H04L67/1008: Server selection for load balancing based on parameters of servers, e.g. available memory or workload

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a load balancing method, a load balancing device and a load balancing system. The method comprises the following steps: an allocation server receives a request instruction from a client, wherein the request instruction is used to request, from the allocation server, information about the currently available front-end servers, and a front-end server provides a data access channel for the client to access the application server; after receiving the request instruction, the allocation server transmits a data list to the client, and the client accesses a front-end server according to the data list, wherein the data list is a list storing the information about the currently available front-end servers. With the method, device and system, the problem that a single point of failure in a server has a large impact is solved, and the impact of such a failure is effectively reduced.

Description

Load-balancing method, apparatus and system
Technical field
The present invention relates to the field of the Internet, and in particular to a load-balancing method, apparatus and system.
Background art
Load-balancing technology is usually used to balance load among servers. In the prior art, when a server suffers a single point of failure, a "floating IP" (also called a virtual IP) is generally used to provide redundancy between load balancers in an active-standby (or multi-standby) arrangement: when one load balancer fails, the floating IP is moved to another load balancer. This scheme solves most problems, but each floating-IP switchover causes an interruption of roughly 5 to 10 seconds, so the server experiences a period of interruption and the resulting impact is considerable.
No effective solution has yet been proposed in the related art for the problem that a single point of failure in a server has a large impact.
Summary of the invention
The main purpose of the present invention is to provide a load-balancing method, apparatus and system, so as to solve the problem that a single point of failure in a server has a large impact.
To achieve this goal, according to one aspect of the present invention, a load-balancing method is provided. The load-balancing method according to the present invention comprises: a distribution server receives a request instruction from a client, wherein the request instruction requests, from the distribution server, information about the currently available front-end servers, and a front-end server provides a data access channel for the client to access an application server; and after receiving the request instruction, the distribution server sends a data list to the client, wherein the data list is a list storing the information about the currently available front-end servers, and the client accesses a front-end server according to the data list.
Further, after the distribution server sends the data list to the client, the method also comprises: the distribution server obtains updated information about the currently available front-end servers; the distribution server updates the data list according to the obtained updated information; and the distribution server sends the updated data list to the client.
Further, the distribution server comprises multiple distribution servers, including a first distribution server and a second distribution server, and the data list comprises multiple data lists, including a first data list and a second data list. The first data list stores the currently available front-end server information obtained by the first distribution server, the second data list stores the currently available front-end server information obtained by the second distribution server, and the two lists store the same information. After the distribution server receives the request instruction, sending the data list to the client comprises: judging whether accessing the first distribution server succeeds; if accessing the first distribution server succeeds, the first distribution server sends the first data list to the client; if accessing the first distribution server fails, the second distribution server receives a first instruction, wherein the first instruction is a prestored access instruction for accessing the second distribution server, and the second distribution server sends the second data list to the client.
Further, the request instruction is an instruction to access the first distribution server via the domain name corresponding to the first distribution server.
Further, the first instruction is an instruction to access the second distribution server via the IP address corresponding to the second distribution server.
To achieve this goal, according to another aspect of the present invention, a load-balancing method is also provided. The load-balancing method according to the present invention comprises: a client sends a request instruction to a distribution server, wherein the request instruction is an instruction with which the client requests, from the distribution server, information about the currently available front-end servers, and a front-end server provides a data access channel for the client to access an application server; the client receives a data list, wherein the data list is a list storing the information about the currently available front-end servers, and the distribution server is also configured to send the data list to the client; and the client accesses a front-end server according to the data list.
Further, the front-end server comprises multiple front-end servers, including a first front-end server and a second front-end server, and accessing a front-end server according to the data list comprises: the client obtains the data list; the client determines, according to the data list, a connection request instruction for connecting to the first front-end server, wherein the connection request instruction is the instruction with which the client requests to connect to and access a front-end server, and the availability of the first front-end server is higher than the availability of the second front-end server; and the client accesses the first front-end server via the connection request instruction.
To achieve this goal, according to a further aspect of the present invention, a load-balancing apparatus is provided. The load-balancing apparatus according to the present invention comprises: a receiving unit, configured to receive a request instruction from a client, wherein the request instruction requests, from a distribution server, information about the currently available front-end servers, and a front-end server provides a data access channel for the client to access an application server; and a transmitting unit, configured to send, after the request instruction is received, a data list to the client, wherein the data list is a list storing the information about the currently available front-end servers, and the client accesses a front-end server according to the data list.
To achieve this goal, according to a further aspect of the present invention, a load-balancing apparatus is also provided. The load-balancing apparatus according to the present invention comprises: a transmitting unit, configured to send a request instruction to a distribution server, wherein the request instruction is an instruction with which a client requests, from the distribution server, information about the currently available front-end servers, and a front-end server provides a data access channel for the client to access an application server; a receiving unit, configured to receive a data list, wherein the data list is a list storing the information about the currently available front-end servers, and the distribution server is also configured to send the data list to the client; and an access unit, configured to access a front-end server according to the data list.
To achieve this goal, according to a further aspect of the present invention, a load-balancing system is provided. The load-balancing system according to the present invention comprises: a client, configured to access a distribution server, receive the data list sent by the distribution server, the data list being a list storing the information about the currently available front-end servers, and access a front-end server according to the data list; a distribution server, configured to receive a request instruction from the client, wherein the request instruction requests, from the distribution server, information about the currently available front-end servers, and to send the data list to the client after receiving the request instruction; and a front-end server, configured to provide a data access channel for the client to access an application server.
Through the present invention, a method comprising the following steps is adopted: a distribution server receives a request instruction from a client, wherein the request instruction requests, from the distribution server, information about the currently available front-end servers, and a front-end server provides a data access channel for the client to access an application server; after receiving the request instruction, the distribution server sends a data list to the client, wherein the data list is a list storing the information about the currently available front-end servers, and the client accesses a front-end server according to the data list. Because the client obtains a data list storing the information about the currently available front-end servers and accesses a front-end server according to this data list, the problem that a single point of failure in a server has a large impact is solved, and the impact of such a failure is effectively reduced.
Brief description of the drawings
The accompanying drawings, which form part of this application, are provided to give a further understanding of the present invention. The schematic embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
Fig. 1 is a flow chart of a load-balancing method according to a first embodiment of the present invention;
Fig. 2 is a flow chart of a load-balancing method according to a second embodiment of the present invention;
Fig. 3 is a schematic diagram of a load-balancing apparatus according to a first embodiment of the present invention;
Fig. 4 is a schematic diagram of a load-balancing apparatus according to a second embodiment of the present invention; and
Fig. 5 is a schematic diagram of a load-balancing system according to the present invention.
Detailed description of the embodiments
It should be noted that, provided there is no conflict, the embodiments in this application and the features in the embodiments may be combined with each other. The present invention is described in detail below with reference to the drawings and in conjunction with the embodiments.
In order to enable those skilled in the art to better understand the solution of this application, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are only some of the embodiments of this application rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in this application without creative effort shall fall within the scope of protection of this application.
It should be noted that the terms "first", "second" and the like in the specification, the claims and the drawings of this application are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that the data used in this way may be interchanged where appropriate, so that the embodiments of this application described herein can be implemented. In addition, the terms "comprise" and "have" and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, product or device that contains a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to such a process, method, product or device.
In the present invention, some technical terms are explained as follows:
Load balancing (Load Balance): balancing tasks and distributing them across multiple operating units for execution, such as web servers, FTP servers, enterprise key application servers and other mission-critical servers, so that the tasks are completed jointly.
Server (Server): broadly, a server is a computer system that can provide services to other machines in a network. A computer that provides an FTP service to the outside can also be regarded as a server.
Client (Client), also called the user side: a program that corresponds to the server and provides local services for the user. Apart from some applications that run only locally, a client is generally installed on an ordinary client computer and needs to cooperate with a server side. Since the development of the Internet, common clients include the web browser used for the World Wide Web, the email client used for sending and receiving email, and instant-messaging client software. For this class of application programs, a corresponding server and service program must exist in the network to provide the corresponding service, such as a database service or an email service, so a specific communication connection needs to be established between the client and the server to ensure the normal operation of the application program.
Single point of failure (Single Point of Failure): literally, a failure occurring at a single point. The term is usually applied to computer systems and networks: if a single location or a single server in the network fails, the entire network is paralyzed.
Fig. 1 is a flow chart of a load-balancing method according to a first embodiment of the present invention. As shown in Fig. 1, the method comprises the following steps S101 to S103:
Step S101: the client sends a request instruction to the distribution server.
The client sends a request instruction to the distribution server, wherein the request instruction is an instruction with which the client requests, from the distribution server, information about the currently available front-end servers, and a front-end server provides a data access channel for the client to access an application server. The request instruction is an instruction to access the distribution server via its corresponding domain name.
For example, the client requests access to the distribution server via http://www.sina.com, that is, the client sends the request instruction to the distribution server corresponding to www.sina.com. The distribution server obtains the relevant information about the currently available front-end servers, specifically information such as server name, IP address, server state and availability, and stores this information in a data list. If the client successfully accesses the distribution server via www.sina.com, the distribution server sends this data list to the client.
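As an illustration only, the data list described above can be pictured as a small collection of records, one per currently available front-end server. The following Python sketch is not part of the patent; the field names and example values are assumptions used to make the structure concrete.

    # Hypothetical sketch of the data list: one record per currently available
    # front-end server, holding the kinds of fields mentioned above (server name,
    # IP address, server state and availability). Field names are illustrative.
    from dataclasses import dataclass

    @dataclass
    class FrontEndServerInfo:
        name: str            # server name
        ip: str              # IP address
        state: str           # server state, e.g. "online"
        availability: float  # higher value means higher availability

    # Example data list that a distribution server could send to a client.
    data_list = [
        FrontEndServerInfo(name="fe-01", ip="10.0.0.11", state="online", availability=0.98),
        FrontEndServerInfo(name="fe-02", ip="10.0.0.12", state="online", availability=0.87),
    ]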
Step S102: the client receives the data list.
The distribution server sends the data list to the client, and the client receives it. The data list is a list storing the information about the currently available front-end servers. The data lists received by the client may be different data lists sent by different distribution servers, but the information stored in the different data lists is identical, because the currently available front-end server information obtained by the different distribution servers is the same.
Step S103: the client accesses a front-end server according to the data list.
After receiving the data list, the client selects one front-end server from the data list of currently available front-end servers and accesses it.
Preferably, in order to improve the efficiency with which the client accesses front-end servers, in the load-balancing method provided by the embodiment of the present invention, the front-end server comprises multiple front-end servers, including a first front-end server and a second front-end server, and the client accessing a front-end server according to the data list comprises: the client obtains the data list; the client determines, according to the data list, a connection request instruction for connecting to the first front-end server, wherein the connection request instruction is the instruction with which the client requests to connect to and access a front-end server, and the availability of the first front-end server is higher than the availability of the second front-end server; and the client accesses the first front-end server via the connection request instruction.
By using the information about the currently available front-end servers stored in the data list and selecting the front-end server with the best availability as the server to access, the efficiency of the client's access to front-end servers is improved.
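A minimal sketch of this client-side selection, assuming the FrontEndServerInfo records from the earlier sketch: the client picks the entry with the highest availability and opens a connection to it. The port number and the use of a plain TCP connection are assumptions, not details given by the patent.

    import socket

    def access_front_end(data_list, port=8080):
        # Step S103 as a sketch: choose the front-end server with the highest
        # availability from the data list, then connect to it.
        best = max(data_list, key=lambda s: s.availability)
        return socket.create_connection((best.ip, port), timeout=5)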
In the load-balancing method provided by the embodiment of the present invention, the client sends a request instruction to the distribution server, the client receives a data list, and the client accesses a front-end server according to the data list. Because the client obtains a data list storing the information about the currently available front-end servers and accesses a front-end server according to this data list, the problem that a single point of failure in a server has a large impact is solved, and the impact of such a failure is effectively reduced.
Fig. 2 is a flow chart of a load-balancing method according to a second embodiment of the present invention. As shown in Fig. 2, the method comprises the following steps S201 to S202:
Step S201: the distribution server receives a request instruction from the client.
The distribution server receives a request instruction from the client, wherein the request instruction requests, from the distribution server, information about the currently available front-end servers, and a front-end server provides a data access channel for the client to access an application server.
Step S202: after receiving the request instruction, the distribution server sends a data list to the client.
After receiving the request instruction, the distribution server sends a data list to the client, wherein the data list is a list storing the information about the currently available front-end servers, and the client accesses a front-end server according to this data list.
Specifically, the distribution server comprises multiple distribution servers, including a first distribution server and a second distribution server, and the data list comprises multiple data lists, including a first data list and a second data list. The first data list stores the currently available front-end server information obtained by the first distribution server, the second data list stores the currently available front-end server information obtained by the second distribution server, and the two lists store the same information. After the distribution server receives the request instruction, sending the data list to the client comprises: judging whether accessing the first distribution server succeeds; if accessing the first distribution server succeeds, the first distribution server sends the first data list to the client; if accessing the first distribution server fails, the second distribution server receives a first instruction, wherein the first instruction is a prestored access instruction for accessing the second distribution server, and the second distribution server sends the second data list to the client.
The request instruction is an instruction to access the first distribution server via the domain name corresponding to the first distribution server. The first instruction is an instruction to access the second distribution server via the IP address corresponding to the second distribution server.
The first instruction is an instruction, prestored by the distribution server itself, that is used when accessing the first distribution server fails; it accesses the second distribution server by its IP address. This ensures that when the first distribution server suffers a fault such as a domain-name resolution failure, the client can still reach the second distribution server and obtain the data list storing the currently available front-end servers.
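The failover between the two distribution servers can be sketched as follows. The URLs are hypothetical placeholders: the first request goes to the first distribution server by its domain name, and only if that fails (for example, because domain-name resolution fails) does the client use the prestored IP address of the second distribution server.

    import json
    import urllib.request

    def fetch_data_list():
        # Sketch of the described fallback: try the first distribution server via
        # its domain name, then the second distribution server via a prestored IP.
        primary_url = "http://dispatch.example.com/servers"   # hypothetical domain name
        fallback_url = "http://203.0.113.10/servers"          # hypothetical prestored IP address
        for url in (primary_url, fallback_url):
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    return json.load(resp)  # the data list of currently available front-end servers
            except OSError:
                continue
        raise RuntimeError("no distribution server reachable")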
Preferably, in order to ensure the accuracy of the information in the data list, in the load-balancing method provided by the embodiment of the present invention, after the distribution server sends the data list to the client, the method also comprises: the distribution server obtains updated information about the currently available front-end servers; the distribution server updates the data list according to the obtained updated information; and the distribution server sends the updated data list to the client.
By periodically updating the data list and sending it to the client, the accuracy of the information in the data list is ensured, which in turn improves the accuracy of the client's access to front-end servers according to this data list.
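One possible way to realize this periodic update on the distribution server is a simple background refresh loop, sketched below. probe_front_end_servers() and publish_to_clients() stand in for the probing and delivery mechanisms, which the patent does not specify, and the 30-second interval is an assumption.

    import threading
    import time

    def start_refresh_loop(probe_front_end_servers, publish_to_clients, interval_s=30):
        # Periodically rebuild the data list from the currently available
        # front-end servers and send the updated list to clients.
        def loop():
            while True:
                data_list = probe_front_end_servers()  # updated front-end server information
                publish_to_clients(data_list)          # send the updated data list to clients
                time.sleep(interval_s)
        threading.Thread(target=loop, daemon=True).start()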
In the load-balancing method provided by the embodiment of the present invention, the distribution server receives a request instruction from the client and, after receiving the request instruction, sends a data list to the client. Because the client obtains a data list storing the information about the currently available front-end servers and accesses a front-end server according to this data list, the problem that a single point of failure in a server has a large impact is solved, and the impact of such a failure is effectively reduced.
It should be noted that the steps shown in the flow charts of the drawings may be executed in a computer system, such as a system capable of executing a set of computer-executable instructions, and that, although a logical order is shown in the flow charts, in some cases the steps shown or described may be executed in an order different from the one given herein.
The present invention also provides a load-balancing apparatus. This load-balancing apparatus is arranged on a distribution server, or acts as a distribution server. The load-balancing apparatus is introduced below.
Fig. 3 is a schematic diagram of a load-balancing apparatus according to a first embodiment of the present invention. As shown in Fig. 3, the apparatus comprises a receiving unit 10 and a transmitting unit 12.
The receiving unit 10 is configured to receive a request instruction from a client, wherein the request instruction requests, from the distribution server, information about the currently available front-end servers, and a front-end server provides a data access channel for the client to access an application server.
The transmitting unit 12 is configured to send, after the request instruction is received, a data list to the client, wherein the data list is a list storing the information about the currently available front-end servers, and the client accesses a front-end server according to the data list.
In the load-balancing apparatus provided by the embodiment of the present invention, the receiving unit 10 receives the request instruction from the client, and the transmitting unit 12, after the request instruction is received, sends the data list to the client. Through the present invention, the problem that a single point of failure in a server has a large impact is solved, and the impact of such a failure is effectively reduced.
The present invention also provides a load-balancing apparatus. This load-balancing apparatus is arranged in a client, or acts as a client. The load-balancing apparatus is introduced below.
Fig. 4 is a schematic diagram of a load-balancing apparatus according to a second embodiment of the present invention. As shown in Fig. 4, the apparatus comprises a transmitting unit 20, a receiving unit 22 and an access unit 24.
The transmitting unit 20 is configured to send a request instruction to the distribution server, wherein the request instruction is an instruction with which the client requests, from the distribution server, information about the currently available front-end servers, and a front-end server provides a data access channel for the client to access an application server.
The receiving unit 22 is configured to receive a data list, wherein the data list is a list storing the information about the currently available front-end servers, and the distribution server is also configured to send the data list to the client.
The access unit 24 is configured to access a front-end server according to the data list.
In the load-balancing apparatus provided by the embodiment of the present invention, the transmitting unit 20 sends the request instruction to the distribution server, the receiving unit 22 receives the data list, and the access unit 24 accesses a front-end server according to the data list. Through the present invention, the problem that a single point of failure in a server has a large impact is solved, and the impact of such a failure is effectively reduced.
The embodiment of the present invention also provides a load-balancing system. It should be noted that the load-balancing system of the embodiment of the present invention may be used to execute the load-balancing method provided by the embodiment of the present invention. The load-balancing system provided by the embodiment of the present invention is introduced below.
Fig. 5 is a schematic diagram of a load-balancing system according to the present invention. As shown in Fig. 5, the system comprises a client 100, a distribution server 200 and a front-end server 300.
The client 100 is configured to access the distribution server 200 and receive the data list sent by the distribution server 200, wherein the data list is a list storing information about the currently available front-end servers 300, and the client 100 accesses a front-end server 300 according to the data list.
The distribution server 200 is configured to receive a request instruction from the client 100, wherein the request instruction requests, from the distribution server 200, information about the currently available front-end servers 300, and the distribution server 200, after receiving the request instruction, sends the data list to the client 100.
The front-end server 300 is configured to provide a data access channel for the client 100 to access an application server.
In the load-balancing system provided by the embodiment of the present invention, the client 100 accesses the distribution server 200, receives the data list sent by the distribution server 200, and accesses a front-end server 300 according to the data list; the distribution server 200 receives the request instruction from the client 100 and, after receiving it, sends the data list to the client 100; and the front-end server 300 provides a data access channel for the client 100 to access the application server. Through the present invention, the problem that a single point of failure in a server has a large impact is solved, and the impact of such a failure is effectively reduced.
Obviously, those skilled in the art should understand that the modules or steps of the present invention described above may be implemented with a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network formed by multiple computing devices; optionally, they may be implemented with program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device, or they may each be made into an individual integrated-circuit module, or multiple modules or steps among them may be made into a single integrated-circuit module. In this way, the present invention is not limited to any specific combination of hardware and software.
The above description covers only the preferred embodiments of the present invention and is not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (10)

1. A load-balancing method, characterized by comprising:
a distribution server receiving a request instruction from a client, wherein the request instruction requests, from the distribution server, information about the currently available front-end servers, and the front-end servers are configured to provide a data access channel for the client to access an application server; and
after receiving the request instruction, the distribution server sending a data list to the client, wherein the data list is a list storing the information about the currently available front-end servers, and the client accesses a front-end server according to the data list.
2. The method according to claim 1, characterized in that, after the distribution server sends the data list to the client, the method further comprises:
the distribution server obtaining updated information about the currently available front-end servers;
the distribution server updating the data list according to the obtained updated information; and
the distribution server sending the updated data list to the client.
3. The method according to claim 1, wherein the distribution server comprises multiple distribution servers including a first distribution server and a second distribution server, the data list comprises multiple data lists including a first data list and a second data list, the first data list stores the currently available front-end server information obtained by the first distribution server, the second data list stores the currently available front-end server information obtained by the second distribution server, and the first data list and the second data list store the same information, characterized in that, after the distribution server receives the request instruction, the distribution server sending the data list to the client comprises:
judging whether accessing the first distribution server succeeds;
if accessing the first distribution server succeeds, the first distribution server sending the first data list to the client;
if accessing the first distribution server fails, the second distribution server receiving a first instruction, wherein the first instruction is a prestored access instruction for accessing the second distribution server; and
the second distribution server sending the second data list to the client.
4. The method according to claim 3, characterized in that the request instruction is an instruction to access the first distribution server via the domain name corresponding to the first distribution server.
5. The method according to claim 3, characterized in that the first instruction is an instruction to access the second distribution server via the IP address corresponding to the second distribution server.
6. A load-balancing method, characterized by comprising:
a client sending a request instruction to a distribution server, wherein the request instruction is an instruction with which the client requests, from the distribution server, information about the currently available front-end servers, and the front-end servers are configured to provide a data access channel for the client to access an application server;
the client receiving a data list, wherein the data list is a list storing the information about the currently available front-end servers, and the distribution server is further configured to send the data list to the client; and
the client accessing a front-end server according to the data list.
7. The method according to claim 6, characterized in that the front-end server comprises multiple front-end servers including a first front-end server and a second front-end server, and the client accessing a front-end server according to the data list comprises:
the client obtaining the data list;
the client determining, according to the data list, a connection request instruction for connecting to the first front-end server, wherein the connection request instruction is the request instruction with which the client requests to connect to and access a front-end server, and the availability of the first front-end server is higher than the availability of the second front-end server; and
the client accessing the first front-end server via the connection request instruction.
8. A load-balancing apparatus, characterized by comprising:
a receiving unit, configured to receive a request instruction from a client, wherein the request instruction requests, from a distribution server, information about the currently available front-end servers, and the front-end servers are configured to provide a data access channel for the client to access an application server; and
a transmitting unit, configured to send, after the request instruction is received, a data list to the client, wherein the data list is a list storing the information about the currently available front-end servers, and the client accesses a front-end server according to the data list.
9. A load-balancing apparatus, characterized by comprising:
a transmitting unit, configured to send a request instruction to a distribution server, wherein the request instruction is an instruction with which a client requests, from the distribution server, information about the currently available front-end servers, and the front-end servers are configured to provide a data access channel for the client to access an application server;
a receiving unit, configured to receive a data list, wherein the data list is a list storing the information about the currently available front-end servers, and the distribution server is further configured to send the data list to the client; and
an access unit, configured to access a front-end server according to the data list.
10. A load-balancing system, characterized by comprising:
a client, configured to access a distribution server and receive a data list sent by the distribution server, wherein the data list is a list storing the information about the currently available front-end servers, and the client accesses a front-end server according to the data list;
a distribution server, configured to receive a request instruction from the client, wherein the request instruction requests, from the distribution server, information about the currently available front-end servers, and the distribution server, after receiving the request instruction, sends the data list to the client; and
a front-end server, configured to provide a data access channel for the client to access an application server.
CN201410642193.0A 2014-11-13 2014-11-13 Load-balancing method, apparatus and system Active CN104301439B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410642193.0A CN104301439B (en) 2014-11-13 2014-11-13 Load-balancing method, apparatus and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410642193.0A CN104301439B (en) 2014-11-13 2014-11-13 Load-balancing method, apparatus and system

Publications (2)

Publication Number Publication Date
CN104301439A true CN104301439A (en) 2015-01-21
CN104301439B CN104301439B (en) 2019-02-26

Family

ID=52321002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410642193.0A Active CN104301439B (en) 2014-11-13 2014-11-13 Load-balancing method, apparatus and system

Country Status (1)

Country Link
CN (1) CN104301439B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107615734A (en) * 2015-05-26 2018-01-19 爱唯思有限公司 System and method for server failover and load balancing
CN107666497A (en) * 2016-07-27 2018-02-06 北京京东尚科信息技术有限公司 Data access method and device
CN107800794A (en) * 2017-10-26 2018-03-13 广州市雷军游乐设备有限公司 The system for realizing platform safety stable operation
CN109274584A (en) * 2018-09-28 2019-01-25 乐蜜有限公司 Access method and device for access server, client device and storage medium
WO2020259598A1 (en) * 2019-06-27 2020-12-30 网联清算有限公司 Transaction data processing method, device, apparatus and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020019243A1 (en) * 2000-06-15 2002-02-14 International Business Machines Corporation Short message gateway, system and method of providing information service for mobile telephones
CN101247349A (en) * 2008-03-13 2008-08-20 华耀环宇科技(北京)有限公司 Network flux fast distribution method
CN102263828A (en) * 2011-08-24 2011-11-30 北京蓝汛通信技术有限责任公司 Load balanced sharing method and equipment
CN103442030A (en) * 2013-07-31 2013-12-11 北京京东尚科信息技术有限公司 Method and system for sending and processing service request messages and client-side device
CN103580988A (en) * 2012-07-31 2014-02-12 阿里巴巴集团控股有限公司 Method for message receiving, pushing and transmitting, device, server group and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020019243A1 (en) * 2000-06-15 2002-02-14 International Business Machines Corporation Short message gateway, system and method of providing information service for mobile telephones
CN101247349A (en) * 2008-03-13 2008-08-20 华耀环宇科技(北京)有限公司 Network flux fast distribution method
CN102263828A (en) * 2011-08-24 2011-11-30 北京蓝汛通信技术有限责任公司 Load balanced sharing method and equipment
CN103580988A (en) * 2012-07-31 2014-02-12 阿里巴巴集团控股有限公司 Method for message receiving, pushing and transmitting, device, server group and system
CN103442030A (en) * 2013-07-31 2013-12-11 北京京东尚科信息技术有限公司 Method and system for sending and processing service request messages and client-side device

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107615734A (en) * 2015-05-26 2018-01-19 爱唯思有限公司 System and method for server failover and load balancing
CN107615734B (en) * 2015-05-26 2021-01-26 爱唯思有限公司 System and method for server failover and load balancing
CN107666497A (en) * 2016-07-27 2018-02-06 北京京东尚科信息技术有限公司 Data access method and device
CN107666497B (en) * 2016-07-27 2020-09-29 北京京东尚科信息技术有限公司 Data access method and device
CN107800794A (en) * 2017-10-26 2018-03-13 广州市雷军游乐设备有限公司 The system for realizing platform safety stable operation
CN109274584A (en) * 2018-09-28 2019-01-25 乐蜜有限公司 Access method and device for access server, client device and storage medium
CN109274584B (en) * 2018-09-28 2021-10-15 卓米私人有限公司 Access method and device for access server, client device and storage medium
WO2020259598A1 (en) * 2019-06-27 2020-12-30 网联清算有限公司 Transaction data processing method, device, apparatus and system

Also Published As

Publication number Publication date
CN104301439B (en) 2019-02-26

Similar Documents

Publication Publication Date Title
CN103312661B (en) A kind of service access method and device
CN104301439A (en) Load balancing method, device and system
CN103841134B (en) Based on API transmission, the method for receive information, apparatus and system
CN101677324B (en) Business management method, terminal, network system and related equipment
US10241876B1 (en) Cooperative fault tolerance and load balancing
CN101610222A (en) Client-based server selection method and device
CN107005435B (en) Network service descriptor shelving method and device
CN110308983A (en) Method for balancing resource load and system, service node and client
CN103209223A (en) Distributed application conversation information sharing method and system and application server
US20150195128A1 (en) Apparatus and method for supporting configuration management of virtual machine, and apparatus and method for brokering cloud service using the configuration management supporting apparatus
CN105939313A (en) State code redirecting method and device
US20130204926A1 (en) Information processing system, information processing device, client terminal, and computer readable medium
CN103647820A (en) Arbitration method and arbitration apparatus for distributed cluster systems
CN104657841A (en) Express item delivery method, delivery processing method, express cabinet terminal and service system
CN107172176A (en) APP method for connecting network, equipment and configuration server based on configuration management
CN110381131A (en) Implementation method, mobile terminal, server and the storage medium of MEC node identification
CN102231765A (en) Method and device for realizing load balance and set-top box
CN106230918A (en) A kind of method and device setting up connection
CN103647811B (en) A method and an apparatus for application's accessing backstage service
CN109474710B (en) Method and device for acquiring information
US7519855B2 (en) Method and system for distributing data processing units in a communication network
CN110012111B (en) Data service cluster system and data processing method
CN106657187A (en) Message processing method and apparatus thereof
CN102047642B (en) Method and device for storing online data
CN102437965A (en) Method and device for accessing target site

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Load balancing method, device and system of server

Effective date of registration: 20190531

Granted publication date: 20190226

Pledgee: Shenzhen Black Horse World Investment Consulting Co., Ltd.

Pledgor: Beijing Guoshuang Technology Co.,Ltd.

Registration number: 2019990000503

CP02 Change in the address of a patent holder

Address after: 100083 No. 401, 4th Floor, Haitai Building, 229 North Fourth Ring Road, Haidian District, Beijing

Patentee after: BEIJING GRIDSUM TECHNOLOGY Co.,Ltd.

Address before: Floor 8, Block A, Cuigong Hotel, No. 76 Zhichun Road, Shuangyushu, Haidian District, Beijing 100086

Patentee before: BEIJING GRIDSUM TECHNOLOGY Co.,Ltd.