Summary of the Invention
To solve the above technical problem, the present invention provides a server resource distribution method and device.
A server resource distribution method is applied to a server node. The server node is communicatively connected to a plurality of front-end nodes, each front-end node corresponds to a different user set, and the number of front-end nodes is the same as the number of user sets, in one-to-one correspondence. The method includes:
determining server resources to be distributed;
pre-distributing the server resources to be distributed to each front-end node according to a predetermined first weight of each front-end node, so that each front-end node responds to resource applications from users in its corresponding user set.
In a specific embodiment of the present invention, the method further includes:
when a set redistribution time limit is reached, reclaiming the remaining server resources of each front-end node;
determining a second weight of each front-end node;
redistributing the reclaimed remaining server resources to each front-end node according to the second weight of each front-end node.
In a specific embodiment of the present invention, determining the second weight of each front-end node includes:
for each front-end node, obtaining a user application-intent message returned by that front-end node, the message carrying a resource application count, where the user application-intent message is generated by the front-end node from the unprocessed resource applications it has received, after the server resources pre-distributed to the front-end node have all been applied for by users;
determining the second weight of the front-end node according to its resource application count.
In a specific embodiment of the present invention, the first weight of each front-end node is predetermined by the following steps:
obtaining history distribution data from multiple past distributions;
extracting multiple history weights of each front-end node from the history distribution data;
for each front-end node, determining the first weight of the front-end node according to its multiple history weights.
In a specific embodiment of the present invention, the history weight of each distribution is determined according to the number of resources each front-end node actually distributed to users during that distribution.
A server resource distribution device is applied to a server node. The server node is communicatively connected to a plurality of front-end nodes, each front-end node corresponds to a different user set, and the number of front-end nodes is the same as the number of user sets, in one-to-one correspondence. The device includes:
a server resource determining module, configured to determine server resources to be distributed;
a resource pre-distribution module, configured to pre-distribute the server resources to be distributed to each front-end node according to a predetermined first weight of each front-end node, so that each front-end node responds to resource applications from users in its corresponding user set.
In a specific embodiment of the present invention, the device further includes:
a resource reclaiming module, configured to reclaim the remaining server resources of each front-end node when a set redistribution time limit is reached;
a second weight determining module, configured to determine the second weight of each front-end node;
a resource redistribution module, configured to redistribute the reclaimed remaining server resources to each front-end node according to the second weight of each front-end node.
In a specific embodiment of the present invention, the second weight determining module is specifically configured to:
for each front-end node, obtain a user application-intent message returned by that front-end node, the message carrying a resource application count, where the user application-intent message is generated by the front-end node from the unprocessed resource applications it has received, after the server resources pre-distributed to the front-end node have all been applied for by users; and
determine the second weight of the front-end node according to its resource application count.
In a specific embodiment of the present invention, the device further includes a first weight determining module, configured to predetermine the first weight of each front-end node by the following steps:
obtaining history distribution data from multiple past distributions;
extracting multiple history weights of each front-end node from the history distribution data;
for each front-end node, determining the first weight of the front-end node according to its multiple history weights.
In a specific embodiment of the present invention, the history weight of each distribution is determined according to the number of resources each front-end node actually distributed to users during that distribution.
With the technical solution provided by the embodiments of the present invention, the server node is communicatively connected to multiple front-end nodes. After the server node determines the server resources to be distributed, it pre-distributes them to each front-end node according to the predetermined first weight of each front-end node. In this way, each front-end node can respond to resource applications from users in its corresponding user set, which effectively spreads the load of the server node across the front-end nodes; and because the resources are pre-distributed according to each front-end node's first weight, the distribution is fairer and more reasonable.
Detailed Description
The core of the present invention is to provide a server resource distribution method applied to a server node. As shown in Fig. 2, the server node is communicatively connected to multiple front-end nodes, each front-end node corresponds to a different user set, and the number of front-end nodes is the same as the number of user sets, in one-to-one correspondence.
In practical applications, users can be divided into different user sets according to a preset rule, and one front-end node is configured for each user set. For example, dividing by the geographical location of users, the region A corresponding to the server node is divided into subregion A1 and subregion A2; front-end node A1 is configured in subregion A1, and front-end node A2 is configured in subregion A2. Each subregion is then a user set: every user in subregion A1 applies to front-end node A1 for server resources, and every user in subregion A2 applies to front-end node A2 for server resources.
In the embodiments of the present invention, the server node pre-distributes the server resources to be distributed to each front-end node, and each front-end node responds to resource applications from users in its corresponding user set. In this way, when a large number of concentrated, concurrent resource applications arrive, the pressure on the server node is effectively relieved and the availability of the whole system is improved.
To enable those skilled in the art to better understand the solution of the present invention, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to Fig. 3, an implementation flowchart of a server resource distribution method provided by an embodiment of the present invention, the method may include the following steps:
S110: Determine server resources to be distributed.
In the embodiments of the present invention, when the server node is about to pre-distribute server resources, it may first determine the server resources to be distributed.
In practical applications, the server node may treat all of its current server resources as the resources to be distributed, or it may reserve part of the server resources according to a set ratio and treat the remaining server resources as the resources to be distributed. The reserved server resources can be used within a set time period, for example during redistribution, or kept for a preconfigured front-end node.
After the server node determines the server resources to be distributed, it may continue with step S120.
S120: Pre-distribute the server resources to be distributed to each front-end node according to the predetermined first weight of each front-end node, so that each front-end node responds to resource applications from users in its corresponding user set.
In the embodiments of the present invention, before pre-distributing server resources, the server node may predetermine the first weight of each front-end node. According to these first weights, the server node pre-distributes the server resources to be distributed to each front-end node; each front-end node then holds a corresponding quantity of server resources and can respond to resource applications from users in its corresponding user set.
In practical applications, the server node may set the same first weight for every front-end node and pre-distribute an equal quantity of server resources to each of them. Alternatively, the server node may determine the first weight of each front-end node according to the proportion of users corresponding to each front-end node.
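To make the pre-distribution of step S120 concrete, the following is a minimal sketch (not part of the patent text; all function and parameter names are illustrative assumptions) of allocating a pool of resources across front-end nodes in proportion to their first weights, with an optional reserved share as described in step S110:

```python
def pre_distribute(total_resources, first_weights, reserve_ratio=0.0):
    """Pre-distribute resources to front-end nodes in proportion to
    their first weights, optionally reserving a share for later use."""
    reserved = int(total_resources * reserve_ratio)
    to_distribute = total_resources - reserved
    weight_sum = sum(first_weights.values())
    allocation = {
        node: to_distribute * w // weight_sum
        for node, w in first_weights.items()
    }
    # Integer division may leave a remainder; keep it with the reserve.
    reserved += to_distribute - sum(allocation.values())
    return allocation, reserved
```

For instance, with first weights 40 and 60, 1000 resource units, and a 10% reserve, nodes A1 and A2 would receive 360 and 540 units, with 100 units held back.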
In a specific embodiment of the present invention, the first weight of each front-end node can be predetermined by the following steps:
Step 1: obtain history distribution data;
Step 2: extract multiple history weights of each front-end node from the history distribution data;
Step 3: for each front-end node, determine the first weight of the front-end node according to its multiple history weights.
For ease of description, the above three steps are explained together.
When the server node is about to pre-distribute server resources, it may first obtain history distribution data. History distribution data are the data recorded when the server node distributed server resources in earlier time periods, and contain the history weight of each front-end node in each past distribution.
The server node can extract multiple history weights of each front-end node from the history distribution data.
For example, suppose the server node corresponds to two front-end nodes, front-end node 1 and front-end node 2. The history weights of each front-end node that the server node extracts from the history distribution data are shown in Table 1:
Time | Front-end node 1 | Front-end node 2
2012 | 50 | 50
2013 | 45 | 55
2014 | 20 | 80
2015 | 45 | 55
Table 1
For each front-end node, the first weight of the front-end node may be determined according to its multiple history weights.
Specifically, the first weight of each front-end node can be determined by any of the following methods:
First method: averaging. For each front-end node, average its multiple history weights and take the average as the first weight of that front-end node.
For example, according to the history weights in Table 1, the first weight of front-end node 1 is (50+45+20+45)/4 = 40, and the first weight of front-end node 2 is (50+55+80+55)/4 = 60.
Second method: latest data. Following the principle that the most recent data best reflects current characteristics, take the weight set in the most recent distribution as the current first weight.
Taking the history weights in Table 1 as an example, the 2015 weight 45 of front-end node 1 is taken as its current first weight, and the 2015 weight 55 of front-end node 2 is taken as its current first weight.
Third method: outlier rejection. Analyze the history weights, discard abnormal data, and then average the rest.
Taking the history weights in Table 1 as an example, the 2014 data clearly differ from the data of the other years, so they can be discarded to avoid distorting the result. Averaging the remaining history weights gives the first weight of each front-end node, that is: ((50+45+45)/3):((50+55+55)/3) ≈ 47:53, i.e., the first weight of front-end node 1 is 47 and the first weight of front-end node 2 is 53.
Fourth method: correction. Correct the abnormal data, then average the corrected data.
Still taking Table 1 as an example, the 2014 data can be corrected according to the proportion of users corresponding to each front-end node, giving corrected weights of 50 and 50 for that year. Averaging the history weights then gives the first weight of each front-end node, that is: ((50+45+50+45)/4):((50+55+50+55)/4) ≈ 48:52, i.e., the first weight of front-end node 1 is 48 and the first weight of front-end node 2 is 52.
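The first three weight-determination methods above can be sketched as follows. This is an illustrative implementation, not the patent's: the function name is assumed, and the outlier-rejection criterion (discarding weights more than 10 from the node's median) is an assumed heuristic, since the patent leaves the rejection rule open.

```python
from statistics import mean, median

def first_weight(history, method="average"):
    """Compute first weights from per-node history weight lists.

    history: dict mapping node -> list of history weights (oldest first).
    Returns a dict mapping node -> first weight (rounded to an integer).
    """
    if method == "average":
        raw = {node: mean(ws) for node, ws in history.items()}
    elif method == "latest":
        raw = {node: ws[-1] for node, ws in history.items()}
    elif method == "reject_outliers":
        # Discard weights far from the node's median, then average the rest.
        raw = {}
        for node, ws in history.items():
            med = median(ws)
            kept = [w for w in ws if abs(w - med) <= 10] or ws
            raw[node] = mean(kept)
    else:
        raise ValueError(f"unknown method: {method}")
    return {node: round(v) for node, v in raw.items()}
```

Applied to the Table 1 data, averaging yields 40:60, the latest-data method yields 45:55, and outlier rejection (which drops the 2014 values) yields 47:53, matching the worked examples above.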
With the method provided by the embodiments of the present invention, the server node is communicatively connected to multiple front-end nodes. After the server node determines the server resources to be distributed, it pre-distributes them to each front-end node according to the predetermined first weight of each front-end node. In this way, each front-end node can respond to resource applications from users in its corresponding user set, which effectively spreads the load of the server node across the front-end nodes; and because the resources are pre-distributed according to each front-end node's first weight, the distribution is fairer and more reasonable.
Referring to Fig. 4, in an embodiment of the present invention, after the server node pre-distributes the server resources to each front-end node, the method may further include the following steps:
S130: When the set redistribution time limit is reached, reclaim the remaining server resources of each front-end node.
In the embodiments of the present invention, after the server node pre-distributes the server resources to be distributed to each front-end node, each front-end node responds to resource applications from users in its corresponding user set. When the set redistribution time limit is reached, if one or more front-end nodes still hold remaining server resources, i.e., resources not applied for by users in their corresponding user sets, the server node can reclaim those remaining server resources.
S140: Determine the second weight of each front-end node.
The second weight of a front-end node may be the same as its first weight, or it may differ.
In a specific embodiment of the present invention, step S140 may include the following steps:
First step: for each front-end node, obtain a user application-intent message returned by that front-end node, the message carrying a resource application count, where the user application-intent message is generated by the front-end node from the unprocessed resource applications it has received, after the server resources pre-distributed to the front-end node have all been applied for by users;
Second step: determine the second weight of the front-end node according to its resource application count.
For ease of description, the above two steps are explained together.
Before the redistribution time limit is reached, for each front-end node, if the server resources pre-distributed to the front-end node have all been applied for by users of the corresponding user set, but users in that set are still sending resource applications to the front-end node, the front-end node can generate a user application-intent message from the unprocessed resource applications it has received, so that the server resources can be redistributed when the redistribution time limit is reached.
In the embodiments of the present invention, the user application-intent message may record content as shown in Table 2:
Table 2
The message header (Header) marks the start of the message; the message content gives the specific details of the applications, such as the resource application count and resource type; the user information and front-end node information identify the front-end node and the particular users the applications belong to, so that replies can be routed back.
According to the resource application count of a front-end node, the server node can determine the second weight of that front-end node. Specifically, the second weight may be the ratio of the front-end node's resource application count to the total resource application count of all front-end nodes.
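The second weight described above, a node's unprocessed application count divided by the total across all front-end nodes, can be sketched as follows (an illustrative helper, not from the patent; the zero-total case is an assumed convention):

```python
def second_weights(application_counts):
    """Second weight of each front-end node: its unprocessed resource
    application count as a share of the total across all nodes.
    If no node has pending applications, all weights are 0."""
    total = sum(application_counts.values())
    if total == 0:
        return {node: 0.0 for node in application_counts}
    return {node: n / total for node, n in application_counts.items()}
```

For example, if front-end nodes A1 and A2 report 30 and 70 unprocessed applications, their second weights are 0.3 and 0.7.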
S150: Redistribute the reclaimed remaining server resources to each front-end node according to the second weight of each front-end node.
According to the second weight of each front-end node, the reclaimed remaining server resources can be redistributed to the front-end nodes. In this way, each front-end node can continue to respond to resource applications from users in its corresponding user set, and resources are not wasted.
It should be noted that, for a given front-end node, if the front-end node has no pending user applications, its second weight can be set to 0, and no server resources are redistributed to that front-end node.
In practical applications, the redistribution of server resources may be performed once or multiple times; the embodiments of the present invention are not limited in this respect.
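The redistribution step, including the rule that a node with second weight 0 receives nothing, can be sketched as follows (illustrative names and a simple rounding-remainder policy, both assumptions rather than patent requirements):

```python
def redistribute(reclaimed, weights):
    """Redistribute reclaimed resources in proportion to second weights;
    nodes with weight 0 receive nothing."""
    weight_sum = sum(weights.values())
    if weight_sum == 0:
        return {node: 0 for node in weights}
    allocation = {}
    remaining = reclaimed
    for node, w in weights.items():
        share = int(reclaimed * w / weight_sum)
        allocation[node] = share
        remaining -= share
    # Hand any rounding remainder to the node with the largest weight.
    if remaining:
        top = max(weights, key=weights.get)
        allocation[top] += remaining
    return allocation
```

For example, redistributing 100 reclaimed units with second weights 0.3, 0.7, and 0 gives allocations of 30, 70, and 0.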
So far, through the pre-distribution process and the redistribution process, the server resources of the server node have been distributed to the front-end nodes; one complete distribution of server resources thus comprises a pre-distribution process and a redistribution process. Since the first weights of the front-end nodes set in the pre-distribution process may not reflect the actual situation of the current distribution, the final weight of each front-end node in this distribution can be determined according to the number of resources each front-end node actually distributed to users during this distribution.
That is, S1:S2:…:Si:…:Sn = p1:p2:…:pi:…:pn, where p1+p2+…+pn = 100, Si is the total number of resources front-end node i distributed to users in the pre-distribution and redistribution processes, and pi is the final weight of front-end node i.
The final weight of each front-end node determined for each distribution serves as a history weight for determining the first weight in subsequent distributions.
That is to say, the history weight of each distribution can be determined according to the number of resources each front-end node actually distributed to users during that distribution.
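The final-weight proportion above (Si shares scaled so the weights sum to 100) can be sketched as follows; the function name and the policy of assigning the rounding remainder to the largest node are illustrative assumptions:

```python
def final_weights(distributed_totals):
    """Final weight of node i: its share of all actually distributed
    resources in this distribution, scaled so the weights sum to 100."""
    total = sum(distributed_totals.values())
    weights = {node: round(100 * s / total)
               for node, s in distributed_totals.items()}
    # Adjust for rounding so the weights sum exactly to 100.
    diff = 100 - sum(weights.values())
    if diff:
        top = max(weights, key=weights.get)
        weights[top] += diff
    return weights
```

For instance, if front-end nodes 1 and 2 actually distributed 360 and 540 resource units across both processes, their final weights are 40 and 60, which then serve as history weights for later distributions.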
Corresponding to the above method embodiments, an embodiment of the present invention further provides a server resource distribution device. The device is applied to a server node; the server node is communicatively connected to multiple front-end nodes, each front-end node corresponds to a different user set, and the number of front-end nodes is the same as the number of user sets, in one-to-one correspondence. The server resource distribution device described below and the server resource distribution method described above may be referred to in correspondence with each other.
Referring to Fig. 5, the device may include the following modules:
a server resource determining module 210, configured to determine server resources to be distributed;
a resource pre-distribution module 220, configured to pre-distribute the server resources to be distributed to each front-end node according to a predetermined first weight of each front-end node, so that each front-end node responds to resource applications from users in its corresponding user set.
With the device provided by the embodiments of the present invention, the server node is communicatively connected to multiple front-end nodes. After the server node determines the server resources to be distributed, it pre-distributes them to each front-end node according to the predetermined first weight of each front-end node. In this way, each front-end node can respond to resource applications from users in its corresponding user set, which effectively spreads the load of the server node across the front-end nodes; and because the resources are pre-distributed according to each front-end node's first weight, the distribution is fairer and more reasonable.
Referring to Fig. 6, in an embodiment of the present invention, the device may further include the following modules:
a resource reclaiming module 230, configured to reclaim the remaining server resources of each front-end node when a set redistribution time limit is reached;
a second weight determining module 240, configured to determine the second weight of each front-end node;
a resource redistribution module 250, configured to redistribute the reclaimed remaining server resources to each front-end node according to the second weight of each front-end node.
In a specific embodiment of the present invention, the second weight determining module 240 is specifically configured to:
for each front-end node, obtain a user application-intent message returned by that front-end node, the message carrying a resource application count, where the user application-intent message is generated by the front-end node from the unprocessed resource applications it has received, after the server resources pre-distributed to the front-end node have all been applied for by users; and
determine the second weight of the front-end node according to its resource application count.
In a specific embodiment of the present invention, the device further includes a first weight determining module, configured to predetermine the first weight of each front-end node by the following steps:
obtaining history distribution data from multiple past distributions;
extracting multiple history weights of each front-end node from the history distribution data;
for each front-end node, determining the first weight of the front-end node according to its multiple history weights.
In a specific embodiment of the present invention, the history weight of each distribution is determined according to the number of resources each front-end node actually distributed to users during that distribution.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to each other. Since the device disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief; for relevant details, refer to the description of the method.
Those skilled in the art will further appreciate that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
The steps of the methods or algorithms described in the embodiments disclosed herein can be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
A server resource distribution method and device provided by the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. It should be noted that those of ordinary skill in the art can make several improvements and modifications to the present invention without departing from the principles of the invention, and such improvements and modifications also fall within the protection scope of the claims of the present invention.