Summary of the invention
In view of the above problems, the present invention is proposed in order to provide a server connection method and a server system that overcome, or at least partially address, the problems described above.
According to one aspect of the present invention, a server connection method is provided for connecting a Web server to a back-end server. The method comprises: the Web server checking whether a back-end server cluster list is cached locally; if the back-end server cluster list is not cached, the Web server obtaining the back-end server cluster list through a configuration server and caching it locally; if the back-end server cluster list is cached, the Web server obtaining the cached back-end server cluster list and selecting a designated back-end server from it; and the Web server directly connecting to the designated back-end server.
Optionally, the method further comprises: the Web server sending a data processing request to the designated back-end server and waiting for the processing result returned by the designated back-end server.
Optionally, when the back-end server cluster list changes, the Web server obtains the changed cluster list through the configuration server and updates the local cache.
Optionally, the configuration server obtains the back-end server cluster list by calling an application programming interface of the back-end server cluster.
Optionally, the configuration server reads the application programming interface at a set time interval to determine whether the back-end server cluster list has changed, and if it has changed, updates its copy of the back-end server cluster list accordingly.
Optionally, the Web server selects the designated server from the back-end server cluster list by random selection.
Optionally, the Web server selects the designated server from the back-end server cluster list by a load-balancing method.
Optionally, the load-balancing method comprises: the Web server calculating a load value for each back-end server in the back-end server cluster list by the following formula: load value of a back-end server = k1 × CPU usage + k2 × processor performance + k3 × free memory + k4 × bandwidth resources; where k1 is the weight for CPU usage, k2 is the weight for processor performance, k3 is the weight for free memory, and k4 is the weight for bandwidth resources. The Web server selects the back-end server with the lowest load value as the designated back-end server.
According to a further aspect of the invention, a server system is provided. The system comprises a Web server and a configuration server: the configuration server is used to obtain a back-end server cluster list; the Web server is used to check whether the back-end server cluster list is cached locally; if the back-end server cluster list is not cached, to obtain it from the configuration server and cache it locally; and if the back-end server cluster list is cached, to obtain the cached list, select a designated back-end server from it, and directly connect to the designated back-end server.
Optionally, the Web server comprises a list acquisition module, a list cache module, a server selection module and a server connection module: the list acquisition module checks whether the back-end server cluster list is cached locally on the Web server and, if it is not cached, obtains the back-end server cluster list through the configuration server; the list cache module caches locally the back-end server cluster list obtained by the list acquisition module; the server selection module selects a designated back-end server from the back-end server cluster list cached by the list cache module; and the server connection module connects to the designated back-end server.
Optionally, the Web server further comprises a data processing module, which sends a data processing request to the designated back-end server and waits for the designated back-end server to finish processing and return the result to the data processing module.
Optionally, the configuration server obtains the back-end server cluster list by calling an application programming interface of the back-end server cluster.
Optionally, the configuration server reads the application programming interface at a set time interval to determine whether the back-end server cluster list has changed, and if it has changed, updates its copy of the back-end server cluster list accordingly.
As can be seen, in a server connection method according to an embodiment of the present invention, the connection handling of the LVS (Linux Virtual Server) load balancer is replaced by the configuration server. This reduces the network communication time involved in processing a request, and solves the prior-art problem of the Web server waiting indefinitely after the LVS connection is broken.
The above description is merely an overview of the technical solution of the present invention. To make the technical means of the present invention clearer, so that it can be implemented according to the content of the specification, and to make the above and other objects, features and advantages of the present invention more apparent, specific embodiments of the present invention are set forth below.
Embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the accompanying drawings show exemplary embodiments of the present disclosure, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly, and so that its scope can be conveyed fully to those skilled in the art.
As shown in Fig. 2, when the Web server needs to communicate with a back-end server, for example the second database in a database cluster, it first checks whether the back-end server cluster list is cached locally. If no locally cached cluster list is found, the Web server obtains the cluster list from the back-end server cluster through the configuration server and caches the list locally. Once the Web server has obtained the cluster list from the local cache, it selects a back-end server, say database 2, according to some policy; the Web server then connects directly to database 2, sends it a data processing request, and waits for processing to finish.

The configuration server used with this principle may be, for example, a ZooKeeper configuration server. ZooKeeper is an open-source implementation of Google's Chubby: a highly effective and reliable coordination system for distributed environments, used where a Master instance must be elected, configuration information stored, the consistency of file writes guaranteed, and so on. ZooKeeper is a distributed, open-source coordination service for distributed applications; it exposes a simple set of primitives and is a significant component of Hadoop and HBase. Distributed applications can use it to implement functions such as unified naming services, configuration management, distributed locks and cluster management. Each node in ZooKeeper is called a znode and has a unique path identifier; a znode is similar to a node in a Unix file-system path and stores data that can later be retrieved. If the EPHEMERAL flag is set when a znode is created, the znode ceases to exist in ZooKeeper once the client that created it loses its connection to ZooKeeper. ZooKeeper uses Watchers to deliver event information: when the client receives an event, such as a connection timeout, a change in node data or a change in child nodes, it can invoke the corresponding behaviour to process the data. The basic running flow of ZooKeeper is: 1. elect a Leader; 2. synchronize data. There are many Leader election algorithms, but the election criterion they must satisfy is the same: the Leader must hold the highest zxid, and the Leader must be endorsed by a majority of the machines in the cluster.
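The ephemeral-node and Watcher behaviour described above can be illustrated with a minimal in-memory simulation. This is plain Python standing in for the real ZooKeeper client; the class and path names are illustrative, not part of any ZooKeeper API:

```python
class SimpleRegistry:
    """Toy stand-in for a ZooKeeper-style node tree with watchers."""

    def __init__(self):
        self.nodes = {}      # path -> data
        self.watchers = {}   # parent path -> list of callbacks

    def watch_children(self, prefix, callback):
        # Register a callback fired whenever children under prefix change,
        # mimicking ZooKeeper's Watcher notifications.
        self.watchers.setdefault(prefix, []).append(callback)

    def _children(self, prefix):
        return sorted(p for p in self.nodes if p.startswith(prefix + "/"))

    def _notify(self, prefix):
        for cb in self.watchers.get(prefix, []):
            cb(self._children(prefix))

    def create(self, path, data):
        self.nodes[path] = data
        self._notify(path.rsplit("/", 1)[0])

    def delete(self, path):
        # In real ZooKeeper an EPHEMERAL znode is deleted automatically
        # when its creator's session ends; here we delete it explicitly.
        del self.nodes[path]
        self._notify(path.rsplit("/", 1)[0])


seen = []
reg = SimpleRegistry()
reg.watch_children("/backends", lambda children: seen.append(list(children)))
reg.create("/backends/db1", "10.0.0.1:3306")
reg.create("/backends/db2", "10.0.0.2:3306")
reg.delete("/backends/db1")
print(seen[-1])  # the watcher saw db1 disappear: ['/backends/db2']
```

A Web server subscribing in this way learns of back-end arrivals and departures without polling, which is the property the configuration server relies on.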
The policy selection suitable for the principle of the invention may adopt a load-balancing algorithm. A commonly used load-balancing algorithm is random selection, which distributes the load randomly over the available servers: a server is chosen by a random-number generator and the connection is sent to it. Also common is the polling (round-robin) algorithm, which hands each new connection request to the next server in order, so that all requests are ultimately divided evenly across all servers; round-robin works well in most cases. If, however, the load-balanced machines are not fully equal in processing speed, connection speed, memory and so on, a weighted round-robin algorithm, which distributes the accepted connections in proportion to a weight, can be used instead, and a dynamic round-robin algorithm may further monitor and continuously update the weights. Also commonly used is the least-connections algorithm, in which the system hands each new connection to the server that currently has the fewest connections; this algorithm is very effective when the servers' processing capabilities are broadly similar.

For example, suppose the system currently has 20 search engine servers, 1 director server, 1 standby director server and one Web server, where each search engine server is responsible for part of the search task over the overall index, the director server is responsible for sending search requests to the 20 search engine servers and merging the result sets, and the standby director server automatically replaces the director server when it goes down. When the CGI (Common Gateway Interface) of the Web server sends a search request to the director server, suppose that 15 of the search engine servers are currently providing the search service while 5 are generating indexes. Using ZooKeeper guarantees that the director server automatically senses which servers are providing the search service and sends search requests only to those servers. The 20 search engine servers frequently alternate roles: a server providing the search service may stop serving in order to start generating indexes, and a server that has finished generating indexes may return to providing the search service.
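The random, round-robin, and least-connections strategies above can be sketched as follows. This is a simplified illustration; the server names and connection counts are hypothetical:

```python
import itertools
import random

servers = ["db1", "db2", "db3"]

# Random selection: pick any available server uniformly at random.
def pick_random(servers, rng=random.Random(0)):
    return rng.choice(servers)

# Round-robin: hand each new connection to the next server in order.
rr = itertools.cycle(servers)
def pick_round_robin():
    return next(rr)

# Least connections: choose the server with the fewest active connections.
active = {"db1": 7, "db2": 2, "db3": 5}
def pick_least_connections(active):
    return min(active, key=active.get)

rr_picks = [pick_round_robin() for _ in range(4)]
print(rr_picks)                        # ['db1', 'db2', 'db3', 'db1']
print(pick_least_connections(active))  # db2 has the fewest active connections
```

A weighted round-robin variant would repeat each server in the cycle in proportion to its weight; the dynamic variant would rebuild that cycle as measured weights change.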
Fig. 3 shows an embodiment of the present invention, whose principle is consistent with that shown in Fig. 2. Specifically, a server connection method for connecting a Web server to a back-end server comprises:
Step S110: the Web server checks whether a back-end server cluster list is cached locally; if it is not cached, step S120 is performed; if it is cached, step S130 is performed.
Step S120: the Web server obtains the back-end server cluster list from the back-end server cluster through the configuration server and caches it locally. The configuration server is a ZooKeeper-based configuration server that obtains the back-end server cluster list by calling the application programming interface (API, Application Programming Interface) of the back-end server cluster. The configuration server reads the API at a set time interval and determines whether the back-end server cluster list has changed; if it has, the configuration server updates its copy of the cluster list accordingly and caches the updated list locally. Of course, an administrator may also enter or update the back-end server cluster list on the configuration server manually.
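The interval-based refresh of step S120 can be sketched as a polling loop that refreshes the cached list only when the API response differs from the cached copy. The `fetch_cluster_list` callable and its responses are assumptions for illustration, not a real cluster API:

```python
import time

def poll_cluster_list(fetch_cluster_list, interval_s, rounds, on_change):
    """Read the cluster API every interval_s seconds; invoke on_change
    only when the returned list differs from the cached copy."""
    cached = None
    for _ in range(rounds):
        latest = fetch_cluster_list()
        if latest != cached:
            cached = latest
            on_change(cached)
        time.sleep(interval_s)
    return cached

# Hypothetical API responses: the list changes once between polls.
responses = iter([["db1", "db2"], ["db1", "db2"], ["db1", "db2", "db3"]])
updates = []
final = poll_cluster_list(lambda: next(responses), 0.0, 3, updates.append)
print(updates)  # the cache is refreshed only twice, despite three polls
```

In the embodiment, `on_change` would correspond to rewriting the locally cached list (and, via the configuration server, notifying the Web server).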
Step S130: the Web server obtains the cached back-end server cluster list and selects a designated back-end server from it. The selection may be random, or a load-balancing algorithm may be used. One such load-balancing algorithm is for the Web server to calculate a load value for each back-end server in the cluster list by the following formula:
load value of a back-end server = k1 × CPU usage + k2 × processor performance + k3 × free memory + k4 × bandwidth resources;
where k1 is the weight for CPU usage, k2 is the weight for processor performance, k3 is the weight for free memory, and k4 is the weight for bandwidth resources; all of these weights can be set manually as required. The Web server selects the back-end server with the lowest load value as the designated back-end server. For example, suppose there are three databases in the back-end server cluster list and each parameter has been normalized: the load value of database 1 = 0.5 + 0.8 + 0.6 + 0.7 = 2.6; the load value of database 2 = 0.6 + 0.8 + 0.7 + 0.7 = 2.8; the load value of database 3 = 0.8 + 0.7 + 0.3 + 0.7 = 2.5. The Web server selects database 3, which has the lowest load value, as the designated back-end server.
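The worked example above can be checked with a short sketch. The weights k1–k4 are taken as 1 here, matching the example's arithmetic; in practice they would be set by the administrator:

```python
# Normalized metrics per database: (CPU usage, processor performance,
# free memory, bandwidth resources), as in the worked example above.
metrics = {
    "database 1": (0.5, 0.8, 0.6, 0.7),
    "database 2": (0.6, 0.8, 0.7, 0.7),
    "database 3": (0.8, 0.7, 0.3, 0.7),
}
k1, k2, k3, k4 = 1, 1, 1, 1  # illustrative weights; configurable in practice

def load_value(m):
    return k1 * m[0] + k2 * m[1] + k3 * m[2] + k4 * m[3]

loads = {name: round(load_value(m), 2) for name, m in metrics.items()}
designated = min(loads, key=loads.get)
print(loads)       # {'database 1': 2.6, 'database 2': 2.8, 'database 3': 2.5}
print(designated)  # database 3, the minimum load value
```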
Step S140: the Web server connects directly to the designated back-end server.
Step S150: the Web server sends a data processing request to the designated back-end server and waits for the result returned by the designated back-end server.
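Steps S110–S150 can be put together in a small end-to-end sketch. The configuration-server and back-end calls are simulated, and all names are illustrative; selection is by lexicographic minimum purely to keep the example deterministic:

```python
local_cache = {}  # the Web server's local cache

def config_server_fetch():
    # Simulated configuration-server lookup (step S120).
    return ["db1", "db2", "db3"]

def connect_and_request(server, request):
    # Simulated direct connection and data processing (steps S140-S150).
    return f"{server} handled {request}"

def handle(request, select=min):
    # Step S110: check the local cache for the cluster list.
    if "cluster_list" not in local_cache:
        # Step S120: fetch via the configuration server and cache locally.
        local_cache["cluster_list"] = config_server_fetch()
    # Step S130: select a designated back-end server from the cached list.
    server = select(local_cache["cluster_list"])
    # Steps S140-S150: connect directly and wait for the result.
    return connect_and_request(server, request)

print(handle("query #1"))  # first call populates the cache
print(handle("query #2"))  # second call hits the cache
```

Note that no LVS-style intermediary appears between `handle` and `connect_and_request`: the request goes straight from the Web server to the chosen back-end, which is the point of the method.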
As this embodiment shows, when the Web server needs to exchange data with a back-end server and no back-end server cluster list is cached, the cluster list can be obtained through the ZooKeeper configuration server, so that the Web server can connect directly to the back-end server for processing. One link is removed from the network processing chain: the Web server communicates with the back-end server directly, without an intermediate hop, which clearly improves processing efficiency. In addition, while the back-end server has not finished processing, the Web server simply waits for it; the situation in which a connection is broken without the Web server noticing does not arise. A further benefit of using the ZooKeeper configuration server is that, when the back-end server cluster list changes, the configuration server obtains the updated content promptly and notifies the Web server to refresh its local cache.
Fig. 4 shows another embodiment of the invention: a server system based on the principle of Fig. 2. The system comprises a configuration server 310 and a Web server 320. The configuration server 310 obtains the back-end server cluster list. The Web server 320 checks whether the back-end server cluster list is cached locally; if the cluster list is not cached, it obtains the list from the configuration server 310 and caches it locally; if the cluster list is cached, it obtains the cached list, selects a designated back-end server from it, and connects directly to the designated back-end server for data communication. The configuration server 310 obtains the back-end server cluster list by calling the application programming interface (API) of the back-end server cluster; it reads the API at a set time interval, determines whether the cluster list has changed, and if so updates its copy of the cluster list accordingly.
Further, the Web server comprises a list acquisition module 321, a list cache module 322, a server selection module 323, a server connection module 324 and a data processing module 325. The list acquisition module 321 checks whether the back-end server cluster list is cached locally on the Web server 320 and, if it is not cached, obtains the cluster list through the configuration server 310. The list cache module 322 caches locally the cluster list obtained by the list acquisition module 321. The server selection module 323 selects a designated back-end server from the cluster list cached by the list cache module 322. The server connection module 324 connects to the designated back-end server. The data processing module 325 sends data processing requests to the designated back-end server and waits for the designated back-end server to finish processing and return the result to the data processing module.
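The module decomposition of Fig. 4 can be sketched as a single class whose methods mirror modules 321–325. This is an illustrative structure only, with simulated dependencies; selection here is again a deterministic minimum-name choice:

```python
class WebServer:
    """Sketch of the Fig. 4 module decomposition (names illustrative)."""

    def __init__(self, config_server):
        self.config_server = config_server  # configuration server 310
        self.cache = None                   # list cache module 322

    def acquire_list(self):
        # List acquisition module 321: fetch only when nothing is cached.
        if self.cache is None:
            self.cache = self.config_server()
        return self.cache

    def choose_server(self):
        # Server selection module 323: simple deterministic choice here;
        # random or load-based selection would slot in instead.
        return min(self.acquire_list())

    def process(self, request):
        # Connection module 324 and data processing module 325, simulated:
        # connect to the designated server and wait for its result.
        server = self.choose_server()
        return f"{server}: done({request})"


web = WebServer(lambda: ["db2", "db1", "db3"])
print(web.process("req-1"))  # db1: done(req-1)
```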
It should be noted that the algorithms provided by the embodiments of the present invention are not inherently related to any particular computer, virtual system or other equipment. Various general-purpose systems may also be used with the teaching herein, and from the description above the structure required to construct such systems is apparent. Moreover, the present invention is not directed to any particular programming language. It should be understood that the content of the present invention described here can be realized in a variety of programming languages, and the description given above for a specific language is intended to disclose the best mode of the invention.
A large number of details are described in the specification provided here. It will be understood, however, that embodiments of the invention may be practised without these specific details. In some instances, well-known methods, structures and techniques are not shown in detail, so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the invention the features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, the inventive aspects lie in less than all the features of a single foregoing disclosed embodiment. The claims following the detailed description are hereby expressly incorporated into that detailed description, with each claim itself standing as a separate embodiment of the invention.
Those skilled in the art will appreciate that the modules in a device of an embodiment may be changed adaptively and arranged in one or more devices different from that embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and may furthermore be divided into a plurality of sub-modules or sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose.
It should be noted that the above embodiments illustrate rather than limit the invention, and that those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices can be embodied by one and the same item of hardware. The use of the words first, second, third and so on does not indicate any ordering; these words may be interpreted as names.