CN106445683A - Method and device for distributing server resources


Info

Publication number: CN106445683A (granted as CN106445683B)
Application number: CN201610819480.3A
Authority: CN (China)
Prior art keywords: front-end processor node, resource, node, weight
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 徐秀敏, 曹占峰, 尹洪苓
Current assignee: State Grid Corp of China SGCC; State Grid Information and Telecommunication Co Ltd; Beijing Guodiantong Network Technology Co Ltd
Original assignee: State Grid Corp of China SGCC; State Grid Information and Telecommunication Co Ltd; Beijing China Power Information Technology Co Ltd
Application filed by: State Grid Corp of China SGCC, State Grid Information and Telecommunication Co Ltd, and Beijing China Power Information Technology Co Ltd

Classifications

    • G: Physics
    • G06: Computing; calculating or counting
    • G06F: Electric digital data processing
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 2209/00: Indexing scheme relating to G06F 9/00
    • G06F 2209/50: Indexing scheme relating to G06F 9/50
    • G06F 2209/5017: Task decomposition

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a method and device for distributing server resources. The method is applied to a server node that is communicatively connected to multiple front-end processor nodes; each front-end processor node corresponds to a different user set, the number of front-end processor nodes equals the number of user sets, and the two are in one-to-one correspondence. The method comprises the following steps: determining the server resources to be distributed; and, according to a predetermined first weight of each front-end processor node, pre-distributing the server resources to be distributed to the front-end processor nodes, so that each front-end processor node responds to the resource applications of users in its corresponding user set. With the technical scheme of the embodiments of the invention, each front-end processor node responds to the resource applications of users in its corresponding user set, which effectively spreads the load of the server node across the front-end processor nodes; moreover, the server resources are pre-distributed according to the first weight of each front-end processor node, which yields high fairness and rationality.

Description

Method and device for distributing server resources
Technical field
The present invention relates to the field of computer application technology, and in particular to a method and device for distributing server resources.
Background technology
A server resource is a time-sensitive resource: if it is not distributed within a set time period, it goes to waste.
In the prior art, all server resources are placed on a single server node. After resource application opens, the resource applications of all users arrive at the server node concentrated within the same time period; the server node processes all the concurrent resource applications and then distributes the results to the users who sent the applications, as shown in Fig. 1.
This places an excessive load on the server node in a short time. The processing capacity, network communication, and other capabilities of the server node are then very likely to hit a bottleneck, causing the server node to crash and fail to process the users' resource applications normally.
Summary of the invention
To solve the above technical problem, the present invention provides a method and device for distributing server resources.
A method for distributing server resources, applied to a server node, the server node being communicatively connected to multiple front-end processor nodes, each front-end processor node corresponding to a different user set, the number of front-end processor nodes being equal to the number of user sets, with a one-to-one correspondence between them; the method comprises:

determining the server resources to be distributed;

according to a predetermined first weight of each front-end processor node, pre-distributing the server resources to be distributed to each front-end processor node, so that each front-end processor node responds to the resource applications of users in its corresponding user set.
In a specific embodiment of the present invention, the method further comprises:

when a set redistribution time limit is reached, reclaiming the remaining server resources of each front-end processor node;

determining a second weight of each front-end processor node;

according to the second weight of each front-end processor node, redistributing the reclaimed remaining server resources to the front-end processor nodes.
In a specific embodiment of the present invention, determining the second weight of each front-end processor node comprises:

for each front-end processor node, obtaining a user application intention message returned by that node, the message carrying a resource application count, where the user application intention message is information generated by the front-end processor node from the unserved resource applications it has received, after the server resources pre-distributed to it have all been applied for by users;

determining the second weight of the front-end processor node according to its resource application count.
In a specific embodiment of the present invention, the first weight of each front-end processor node is predetermined by the following steps:

obtaining history distribution data from multiple past distributions;

extracting multiple history weights of each front-end processor node from the history distribution data;

for each front-end processor node, determining the first weight of that node from its multiple history weights.
In a specific embodiment of the present invention, the history weight of each past distribution is determined from the number of resources each front-end processor node actually distributed to users during that distribution.
A device for distributing server resources, applied to a server node, the server node being communicatively connected to multiple front-end processor nodes, each front-end processor node corresponding to a different user set, the number of front-end processor nodes being equal to the number of user sets, with a one-to-one correspondence between them; the device comprises:

a server resource determining module, configured to determine the server resources to be distributed;

a resource pre-distribution module, configured to pre-distribute the server resources to be distributed to each front-end processor node according to a predetermined first weight of each node, so that each front-end processor node responds to the resource applications of users in its corresponding user set.
In a specific embodiment of the present invention, the device further comprises:

a resource reclamation module, configured to reclaim the remaining server resources of each front-end processor node when a set redistribution time limit is reached;

a second weight determining module, configured to determine the second weight of each front-end processor node;

a resource redistribution module, configured to redistribute the reclaimed remaining server resources to the front-end processor nodes according to their second weights.
In a specific embodiment of the present invention, the second weight determining module is specifically configured to:

for each front-end processor node, obtain the user application intention message returned by that node, the message carrying a resource application count, where the user application intention message is information generated by the front-end processor node from the unserved resource applications it has received, after the server resources pre-distributed to it have all been applied for by users;

determine the second weight of the front-end processor node according to its resource application count.
In a specific embodiment of the present invention, the device further comprises a first weight determining module, configured to predetermine the first weight of each front-end processor node by the following steps:

obtaining history distribution data from multiple past distributions;

extracting multiple history weights of each front-end processor node from the history distribution data;

for each front-end processor node, determining the first weight of that node from its multiple history weights.

In a specific embodiment of the present invention, the history weight of each past distribution is determined from the number of resources each front-end processor node actually distributed to users during that distribution.
With the technical scheme provided by the embodiments of the present invention, the server node is communicatively connected to multiple front-end processor nodes. After the server node determines the server resources to be distributed, it pre-distributes them to the front-end processor nodes according to each node's predetermined first weight. In this way, each front-end processor node can respond to the resource applications of users in its corresponding user set, which effectively spreads the load of the server node across the front-end processor nodes; and because the server resources are pre-distributed according to each node's first weight, the scheme has high fairness and rationality.
Description of the drawings
To explain the technical schemes of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Clearly, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a structural diagram of a resource application system in the prior art;

Fig. 2 is a structural diagram of a resource application system in an embodiment of the present invention;

Fig. 3 is a flowchart of a method for distributing server resources in an embodiment of the present invention;

Fig. 4 is another flowchart of a method for distributing server resources in an embodiment of the present invention;

Fig. 5 is a structural diagram of a device for distributing server resources in an embodiment of the present invention;

Fig. 6 is another structural diagram of a device for distributing server resources in an embodiment of the present invention.
Detailed description of the embodiments
The core of the present invention is to provide a method for distributing server resources, applied to a server node. As shown in Fig. 2, the server node is communicatively connected to multiple front-end processor nodes; each front-end processor node corresponds to a different user set, the number of front-end processor nodes equals the number of user sets, and the two are in one-to-one correspondence.

In practical applications, users can be divided into different user sets according to a preset rule, and one front-end processor node is configured for each user set. For example, dividing by the users' geographical location, region A served by the server node can be split into sub-region A1 and sub-region A2, with front-end processor node A1 configured for sub-region A1 and front-end processor node A2 configured for sub-region A2. Each sub-region then forms a user set: users in sub-region A1 apply for server resources from front-end processor node A1, and users in sub-region A2 apply for server resources from front-end processor node A2.
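The sub-region example above can be sketched as follows (all names here are illustrative assumptions; the patent does not prescribe any data structures):

```python
def partition_users(users):
    """Group (user_id, region) pairs into user sets keyed by region."""
    user_sets = {}
    for user_id, region in users:
        user_sets.setdefault(region, []).append(user_id)
    return user_sets

def assign_front_end_nodes(user_sets):
    """Configure one front-end processor node per user set (one-to-one)."""
    return {region: "front-end-node-" + region for region in user_sets}

users = [("u1", "A1"), ("u2", "A2"), ("u3", "A1")]
user_sets = partition_users(users)
node_of_region = assign_front_end_nodes(user_sets)
```

A user's resource application is then sent to the node assigned to the user's region, never directly to the server node.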
In the embodiments of the present invention, the server node pre-distributes the server resources to be distributed to the front-end processor nodes, and each front-end processor node responds to the resource applications of users in its corresponding user set.

In this way, when a large number of concentrated concurrent resource applications arrive, the pressure on the server node is effectively relieved and the availability of the whole system is improved.
To help those skilled in the art better understand the present scheme, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. Clearly, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Referring to Fig. 3, which is a flowchart of a method for distributing server resources provided by an embodiment of the present invention, the method may comprise the following steps:

S110: determine the server resources to be distributed.

In the embodiments of the present invention, when the server node is about to pre-distribute server resources, it may first determine which server resources are to be distributed.

In practical applications, the server node may treat all of its current server resources as the resources to be distributed, or it may reserve a portion of the server resources according to a set ratio and designate the remainder as the resources to be distributed. The reserved server resources can be used within a set time period, for example during redistribution, or can be kept for a preset front-end processor node.

After the server node determines the server resources to be distributed, it can proceed to step S120.
S120: according to the predetermined first weight of each front-end processor node, pre-distribute the server resources to be distributed to the front-end processor nodes, so that each front-end processor node responds to the resource applications of users in its corresponding user set.

In the embodiments of the present invention, before pre-distributing server resources, the server node can predetermine the first weight of each front-end processor node. According to these first weights, the server node can pre-distribute the server resources to be distributed to the front-end processor nodes. Each front-end processor node then holds a corresponding quantity of server resources and can respond to the resource applications of users in its corresponding user set.

In practical applications, the server node may set the same first weight for every front-end processor node, pre-distributing an equal quantity of server resources to each.

Alternatively, the server node may determine each front-end processor node's first weight according to the proportion of users corresponding to that node.
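Weight-proportional pre-distribution of an integer resource pool might be sketched as follows (the function name and the largest-remainder rounding policy are assumptions, not specified by the patent):

```python
def pre_distribute(total_resources, first_weights):
    """Pre-distribute integer server resources to front-end processor nodes
    in proportion to their first weights, using largest-remainder rounding
    so that the allocations sum exactly to the total."""
    total_weight = sum(first_weights.values())
    raw = {node: total_resources * w / total_weight for node, w in first_weights.items()}
    alloc = {node: int(r) for node, r in raw.items()}
    leftover = total_resources - sum(alloc.values())
    # hand the remaining units to the nodes with the largest fractional parts
    for node in sorted(raw, key=lambda n: raw[n] - alloc[n], reverse=True)[:leftover]:
        alloc[node] += 1
    return alloc

# first weights of 40 and 60, as in the history-based example of Table 1
allocation = pre_distribute(100, {"node1": 40, "node2": 60})
```

Each front-end processor node would then hold its allocated quantity and answer its own user set's applications directly.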
In a specific embodiment of the present invention, the first weight of each front-end processor node can be predetermined by the following steps:

Step 1: obtain history distribution data;

Step 2: extract multiple history weights of each front-end processor node from the history distribution data;

Step 3: for each front-end processor node, determine the first weight of that node from its multiple history weights.
For ease of description, the above three steps are explained together.

When the server node is about to pre-distribute server resources, it can first obtain the history distribution data, i.e., the relevant data from the server node's resource distributions in other time periods; the history distribution data contains the history weight of each front-end processor node in each past distribution.

The server node can extract multiple history weights of each front-end processor node from the history distribution data.
For example, suppose the server node has two front-end processor nodes, node 1 and node 2, and the multiple history weights of each node extracted from the history distribution data are as shown in Table 1:

Year    Front-end processor node 1    Front-end processor node 2
2012    50                            50
2013    45                            55
2014    20                            80
2015    45                            55

Table 1
For each front-end processor node, its first weight can then be determined from its multiple history weights.

Specifically, the first weight of each front-end processor node can be determined by any of the following methods:
Method 1: averaging. For each front-end processor node, average its multiple history weights and take the mean as that node's first weight.

For example, from the history weights in Table 1, the first weight of front-end processor node 1 is (50+45+20+45)/4 = 40, and the first weight of front-end processor node 2 is (50+55+80+55)/4 = 60.
Method 2: most-recent data. Following the principle that the latest data best reflects current characteristics, take the weight set in the most recent distribution as the current first weight.

Taking the history weights in Table 1 as an example, the 2015 weight of front-end processor node 1, namely 45, is taken as node 1's current first weight, and the 2015 weight of front-end processor node 2, namely 55, is taken as node 2's current first weight.
Method 3: outlier rejection. Analyse the history weights, discard the abnormal data, and then average the rest.

Taking the history weights in Table 1 as an example, the 2014 data clearly deviates from the data of the other years and can be discarded, so that it does not distort the analysis of the history weights. Averaging the history weights that remain after the abnormal ones are discarded gives each front-end processor node's first weight, i.e.:

((50+45+45)/3) : ((50+55+55)/3) ≈ 47 : 53, i.e., the first weight of front-end processor node 1 is 47 and the first weight of front-end processor node 2 is 53.
Method 4: correction. Correct the abnormal data, then average the corrected data.

Again taking Table 1 as an example, the 2014 data can be corrected according to the nodes' corresponding user proportions, giving corrected weights of 50 and 50 for that year; the multiple history weights are then averaged to obtain each node's first weight, i.e.:

((50+45+50+45)/4) : ((50+55+50+55)/4) = 47.5 : 52.5 ≈ 48 : 52, i.e., the first weight of front-end processor node 1 is 48 and the first weight of front-end processor node 2 is 52.
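Methods 1 and 3 above can be sketched as follows, reproducing the Table 1 computations (the function names and the outlier threshold, a deviation of more than 15 from a node's median history weight, are assumptions made for illustration):

```python
from statistics import mean, median

def first_weights_average(history):
    """Method 1 (averaging): mean of each node's history weights,
    renormalised to a 100-point scale."""
    means = {node: mean(ws) for node, ws in history.items()}
    total = sum(means.values())
    return {node: round(100 * m / total) for node, m in means.items()}

def first_weights_scalping(history):
    """Method 3 (outlier rejection): drop weights far from the node's
    median, average the rest, then renormalise to a 100-point scale."""
    trimmed = {}
    for node, ws in history.items():
        med = median(ws)
        kept = [w for w in ws if abs(w - med) <= 15] or ws  # assumed threshold
        trimmed[node] = mean(kept)
    total = sum(trimmed.values())
    return {node: round(100 * m / total) for node, m in trimmed.items()}

history = {"node1": [50, 45, 20, 45], "node2": [50, 55, 80, 55]}
```

With the Table 1 data, the averaging method yields weights 40 and 60, and the outlier-rejection method yields 47 and 53, matching the worked examples in the text.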
With the method provided by the embodiments of the present invention, the server node is communicatively connected to multiple front-end processor nodes. After the server node determines the server resources to be distributed, it pre-distributes them to the front-end processor nodes according to each node's predetermined first weight. In this way, each front-end processor node can respond to the resource applications of users in its corresponding user set, which effectively spreads the load of the server node across the front-end processor nodes; and because the server resources are pre-distributed according to each node's first weight, the scheme has high fairness and rationality.
Referring to Fig. 4, in an embodiment of the present invention, after the server node pre-distributes the server resources to the front-end processor nodes, the method may further comprise the following steps:

S130: when a set redistribution time limit is reached, reclaim the remaining server resources of each front-end processor node.

In the embodiments of the present invention, after the server node pre-distributes the server resources to be distributed to the front-end processor nodes, each front-end processor node responds to the resource applications of users in its corresponding user set. When the set redistribution time limit is reached, if one or more front-end processor nodes still hold remaining server resources, i.e., resources not applied for by users in their corresponding user sets, the server node can reclaim those nodes' remaining server resources.
S140: determine the second weight of each front-end processor node.

The second weight of a front-end processor node may be the same as that node's first weight, or different from it.
In a specific embodiment of the present invention, step S140 may comprise the following steps:

Step 1: for each front-end processor node, obtain the user application intention message returned by that node, the message carrying a resource application count, where the user application intention message is information generated by the front-end processor node from the unserved resource applications it has received, after the server resources pre-distributed to it have all been applied for by users;

Step 2: determine the second weight of the front-end processor node according to its resource application count.

For ease of description, the above two steps are explained together.

Before the redistribution time limit is reached, for any front-end processor node, if the server resources pre-distributed to that node have all been applied for by the users of its corresponding user set, but users in that set are still sending resource applications to it, the node can generate a user application intention message from the unserved resource applications it has received, so that server resources can be redistributed when the redistribution time limit is reached.
In the embodiments of the present invention, the user application intention message can record content of the kind shown in Table 2:

Field                                  Meaning
Purpose header (Header)                Marks the start of the intention message
Purpose content                        Specific application information, such as the resource application count and resource type
User information                       Identifies the particular applying user, to ease the return of the message
Front-end processor node information   Identifies the front-end processor node the application belongs to

Table 2
According to a front-end processor node's resource application count, the server node can determine that node's second weight. Specifically, the second weight can be the ratio of the node's resource application count to the total resource application count of all front-end processor nodes.
S150: according to the second weight of each front-end processor node, redistribute the reclaimed remaining server resources to the front-end processor nodes.

According to the second weights, the reclaimed remaining server resources can be redistributed to the front-end processor nodes. Each front-end processor node can then continue responding to the resource applications of users in its corresponding user set, avoiding waste of resources.

It should be noted that, for any front-end processor node, if the node has no corresponding user application intention, its second weight can be set to 0, and no server resources are redistributed to it.

In practical applications, server resources may be redistributed once or multiple times; the embodiments of the present invention do not limit this.
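The recovery-and-redistribution steps S140 and S150 can be sketched as follows (a minimal illustration under assumed names; the floor rounding policy is an assumption, with any rounding leftover simply retained by the server node):

```python
def second_weights(pending_applications):
    """S140: the second weight of each front-end processor node is its count
    of unserved resource applications divided by the total count across all
    nodes; a node with no pending applications gets weight 0."""
    total = sum(pending_applications.values())
    if total == 0:
        return {node: 0.0 for node in pending_applications}
    return {node: count / total for node, count in pending_applications.items()}

def redistribute(recovered, weights):
    """S150: redistribute the reclaimed remaining resources by second weight."""
    return {node: int(recovered * w) for node, w in weights.items()}

# node3 returned no user application intention, so its second weight is 0
pending = {"node1": 30, "node2": 10, "node3": 0}
weights = second_weights(pending)
realloc = redistribute(20, weights)
```

Here the 20 reclaimed units go 15 to node1 and 5 to node2, in proportion to their unserved application counts, while node3 receives nothing.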
At this point, the server resources of the server node have been distributed to the front-end processor nodes through a pre-distribution process and a redistribution process; one complete distribution of server resources comprises the pre-distribution process and the redistribution process. The first weights set in the pre-distribution process cannot reflect the actual situation of the current distribution, so the final weight of each front-end processor node in this distribution can be determined from the number of resources each node actually distributed to users during this distribution.

That is, S1 : S2 : ... : Si : ... : Sn = p1 : p2 : ... : pi : ... : pn, where p1 + p2 + ... + pn = 100, Si is the total number of resources that front-end processor node i distributed to users in the pre-distribution and redistribution processes, and pi is the final weight of front-end processor node i.

The final weight of each front-end processor node determined in each distribution then serves as a history weight of the first weight in subsequent distributions, i.e., as a reference for determining the first weight.

That is, the history weight of each past distribution can be determined from the number of resources each front-end processor node actually distributed to users during that distribution.
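The final-weight relation above can be sketched as a minimal helper (the function name is an assumption, not from the patent):

```python
def final_weights(distributed):
    """Final weight p_i of front-end processor node i: its share S_i of the
    resources actually distributed to users across the pre-distribution and
    redistribution processes, scaled so that the weights sum to 100
    (S1 : ... : Sn = p1 : ... : pn with p1 + ... + pn = 100)."""
    total = sum(distributed.values())
    return {node: 100 * s / total for node, s in distributed.items()}

# e.g. node1 actually served 120 resource units in this distribution, node2 served 80
p = final_weights({"node1": 120, "node2": 80})
```

These final weights would then be appended to each node's history weights for use when the first weights of the next distribution are determined.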
Corresponding to the above method embodiments, an embodiment of the present invention further provides a device for distributing server resources. The device is applied to a server node; the server node is communicatively connected to multiple front-end processor nodes, each front-end processor node corresponds to a different user set, the number of front-end processor nodes equals the number of user sets, and the two are in one-to-one correspondence. The device for distributing server resources described below and the method for distributing server resources described above may be cross-referenced.

Referring to Fig. 5, the device may comprise the following modules:

a server resource determining module 210, configured to determine the server resources to be distributed;

a resource pre-distribution module 220, configured to pre-distribute the server resources to be distributed to each front-end processor node according to the predetermined first weight of each node, so that each front-end processor node responds to the resource applications of users in its corresponding user set.

With the device provided by the embodiments of the present invention, the server node is communicatively connected to multiple front-end processor nodes. After the server node determines the server resources to be distributed, it pre-distributes them to the front-end processor nodes according to each node's predetermined first weight. In this way, each front-end processor node can respond to the resource applications of users in its corresponding user set, which effectively spreads the load of the server node across the front-end processor nodes; and because the server resources are pre-distributed according to each node's first weight, the scheme has high fairness and rationality.
Referring to Fig. 6, in an embodiment of the present invention, the device may further comprise the following modules:

a resource reclamation module 230, configured to reclaim the remaining server resources of each front-end processor node when a set redistribution time limit is reached;

a second weight determining module 240, configured to determine the second weight of each front-end processor node;

a resource redistribution module 250, configured to redistribute the reclaimed remaining server resources to the front-end processor nodes according to their second weights.
In a specific embodiment of the present invention, the second weight determining module 240 is specifically configured to:

for each front-end processor node, obtain the user application intention message returned by that node, the message carrying a resource application count, where the user application intention message is information generated by the front-end processor node from the unserved resource applications it has received, after the server resources pre-distributed to it have all been applied for by users;

determine the second weight of the front-end processor node according to its resource application count.
In a specific embodiment of the present invention, the device further includes a first weight determination module, configured to predetermine the first weight of each front-end processor node through the following steps:
obtaining historical distribution data of multiple past distributions;
extracting multiple historical weights of each front-end processor node from the historical distribution data;
for each front-end processor node, determining the first weight of that front-end processor node according to its multiple historical weights.
In a specific embodiment of the present invention, the historical weight of each past distribution is determined according to the number of resources that each front-end processor node actually distributed to users during that distribution.
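The historical weights, and the first weight derived from them, can be sketched as follows. The per-round share rule and the plain average are assumptions for illustration, since the patent leaves the exact combination rule open:

```python
def history_weight(actual_distributed):
    """Historical weight of one past round: each node's share of the
    resources actually handed out to users in that round."""
    total = sum(actual_distributed.values())
    return {node: n / total for node, n in actual_distributed.items()}

def first_weight(history_weights):
    """Combine one node's historical weights into its first weight,
    here by a plain average over the recorded rounds."""
    return sum(history_weights) / len(history_weights)

round1 = history_weight({"fep1": 60, "fep2": 40})   # shares 0.6 and 0.4
round2 = history_weight({"fep1": 50, "fep2": 50})   # shares 0.5 and 0.5
w_fep1 = first_weight([round1["fep1"], round2["fep1"]])  # average of 0.6 and 0.5
```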
The embodiments in this specification are described in a progressive manner: each embodiment focuses on its differences from the other embodiments, and for the parts that are the same or similar, the embodiments may be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in another embodiment, its description is relatively brief; for the relevant details, refer to the description of the method.
Those skilled in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of their functions. Whether these functions are executed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
The steps of the methods or algorithms described in connection with the embodiments disclosed herein can be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium well known in the art.
The server resource distribution method and device provided by the present invention have been described in detail above. Specific examples have been used herein to explain the principles and embodiments of the present invention, and the above description of the embodiments is only intended to help readers understand the method of the present invention and its core idea. It should be noted that those of ordinary skill in the art may make improvements and modifications to the present invention without departing from its principles, and such improvements and modifications also fall within the scope of the claims of the present invention.

Claims (10)

1. A server resource distribution method, characterized in that the method is applied to a server node, the server node is communicatively connected with multiple front-end processor nodes, each front-end processor node corresponds to a different user set, and the number of front-end processor nodes is identical to the number of user sets and in one-to-one correspondence with them; the method comprising:
determining server resources to be distributed;
pre-distributing the server resources to be distributed to each front-end processor node according to a predetermined first weight of each front-end processor node, so that each front-end processor node responds to resource applications of the users in its corresponding user set.
2. The method according to claim 1, characterized in that the method further comprises:
when a set redistribution time limit is reached, reclaiming the remaining server resources of each front-end processor node;
determining a second weight of each front-end processor node;
redistributing the reclaimed remaining server resources to each front-end processor node according to the second weight of each front-end processor node.
3. The method according to claim 2, characterized in that determining the second weight of each front-end processor node comprises:
for each front-end processor node, obtaining user application intention information returned by the front-end processor node, the user application intention information carrying a resource application count, wherein the user application intention information is generated by the front-end processor node, after the users have finished applying for the server resources pre-distributed to that node, according to the unprocessed resource applications it has received;
determining the second weight of the front-end processor node according to its resource application count.
4. The method according to any one of claims 1 to 3, characterized in that the first weight of each front-end processor node is predetermined through the following steps:
obtaining historical distribution data of multiple past distributions;
extracting multiple historical weights of each front-end processor node from the historical distribution data;
for each front-end processor node, determining the first weight of the front-end processor node according to its multiple historical weights.
5. The method according to claim 4, characterized in that the historical weight of each past distribution is determined according to the number of resources that each front-end processor node actually distributed to users during that distribution.
6. A server resource distribution device, characterized in that the device is applied to a server node, the server node is communicatively connected with multiple front-end processor nodes, each front-end processor node corresponds to a different user set, and the number of front-end processor nodes is identical to the number of user sets and in one-to-one correspondence with them; the device comprising:
a server resource determination module, configured to determine server resources to be distributed;
a resource pre-distribution module, configured to pre-distribute the server resources to be distributed to each front-end processor node according to a predetermined first weight of each front-end processor node, so that each front-end processor node responds to resource applications of the users in its corresponding user set.
7. The device according to claim 6, characterized in that the device further comprises:
a resource reclamation module, configured to reclaim the remaining server resources of each front-end processor node when a set redistribution time limit is reached;
a second weight determination module, configured to determine a second weight of each front-end processor node;
a resource redistribution module, configured to redistribute the reclaimed remaining server resources to each front-end processor node according to the second weight of each front-end processor node.
8. The device according to claim 7, characterized in that the second weight determination module is specifically configured to:
for each front-end processor node, obtain user application intention information returned by the front-end processor node, the user application intention information carrying a resource application count, wherein the user application intention information is generated by the front-end processor node, after the users have finished applying for the server resources pre-distributed to that node, according to the unprocessed resource applications it has received;
and determine the second weight of the front-end processor node according to its resource application count.
9. The device according to any one of claims 6 to 8, characterized in that the device further comprises a first weight determination module, configured to predetermine the first weight of each front-end processor node through the following steps:
obtaining historical distribution data of multiple past distributions;
extracting multiple historical weights of each front-end processor node from the historical distribution data;
for each front-end processor node, determining the first weight of the front-end processor node according to its multiple historical weights.
10. The device according to claim 9, characterized in that the historical weight of each past distribution is determined according to the number of resources that each front-end processor node actually distributed to users during that distribution.
CN201610819480.3A 2016-09-12 2016-09-12 Server resource distribution method and device Active CN106445683B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610819480.3A CN106445683B (en) 2016-09-12 2016-09-12 Server resource distribution method and device


Publications (2)

Publication Number Publication Date
CN106445683A true CN106445683A (en) 2017-02-22
CN106445683B CN106445683B (en) 2019-12-03

Family

ID=58167807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610819480.3A Active CN106445683B (en) Server resource distribution method and device

Country Status (1)

Country Link
CN (1) CN106445683B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101441580A (en) * 2008-12-09 2009-05-27 华北电网有限公司 Distributed paralleling calculation platform system and calculation task allocating method thereof
CN101534244A (en) * 2009-02-09 2009-09-16 华为技术有限公司 Method, device and system for load distribution
US20100121855A1 (en) * 2003-06-25 2010-05-13 Microsoft Corporation Lookup Partitioning Storage System and Method
CN103713956A (en) * 2014-01-06 2014-04-09 山东大学 Method for intelligent weighing load balance in cloud computing virtualized management environment
CN104702710A (en) * 2013-12-09 2015-06-10 中国联合网络通信集团有限公司 Port allocation method and device
CN105049225A (en) * 2015-06-05 2015-11-11 江苏国电南自海吉科技有限公司 Power system front-end processor system based on dynamic role weight
CN105516746A (en) * 2014-10-14 2016-04-20 鸿富锦精密工业(深圳)有限公司 Video pre-downloading system and method


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109144726A * 2018-08-09 2019-01-04 深圳市瑞云科技有限公司 A method for scheduling node machines by grouping
CN112115202A (en) * 2020-09-18 2020-12-22 北京人大金仓信息技术股份有限公司 Task distribution method and device in cluster environment
CN114928604A (en) * 2022-06-29 2022-08-19 建信金融科技有限责任公司 File distribution method and device
CN114928604B (en) * 2022-06-29 2023-06-16 建信金融科技有限责任公司 File distribution method and device

Also Published As

Publication number Publication date
CN106445683B (en) 2019-12-03

Similar Documents

Publication Publication Date Title
CN104966214B (en) A kind of exchange method and device of electronic ticket
CN108881495A (en) Resource allocation methods, device, computer equipment and storage medium
CN106445683A (en) Method and device for distributing server resource
CN107317712A (en) A kind of creation method and device of network section
CN111356103B (en) Flow quota distribution method and device, server and computer storage medium
CN106815254A (en) A kind of data processing method and device
CN106257893A (en) Storage server task response method, client, server and system
CN108111628A (en) A kind of dynamic capacity-expanding storage method and system
CN111768174A (en) Activity management method, apparatus, device and medium
CN107577700A (en) The processing method and processing device of database disaster tolerance
CN107067187A (en) Telephony task management method, storage device, storage medium and device
CN109285015B (en) Virtual resource allocation method and system
CN107682578A (en) A kind of control method and system of multiple terminals shared service
CN109582829B (en) Processing method, device, equipment and readable storage medium
CN106779924A (en) A kind of cloud platform request for product form processing method under Multistage Proxy pattern
CN106815724A (en) Charging pile and charging method
CN109801153A (en) Syndicated loan method and relevant apparatus based on cloud monitoring
CN105094947B (en) The quota management method and system of a kind of virtual computing resource
CN108629674A (en) Auction the distribution method and terminal device of income
CN106874069A (en) A kind of resources of virtual machine distribution method and device
CN108665177A (en) Resource allocation methods and device
CN112037019B (en) Method and device for allocating self-service equipment of banking outlets
CN103731496B (en) The treating method and apparatus of service request
CN105071498B (en) The charging method and device of electric car
CN106411782B (en) A kind of bandwidth compensation method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 710, Research Building, No. 15 Xiaoying Road, Qinghe, Haidian District, Beijing 100192

Applicant after: BEIJING CHINA POWER INFORMATION TECHNOLOGY Co.,Ltd.

Applicant after: STATE GRID INFORMATION & TELECOMMUNICATION GROUP Co.,Ltd.

Applicant after: STATE GRID CORPORATION OF CHINA

Address before: Room 710, Research Building, No. 15 Xiaoying Road, Qinghe, Haidian District, Beijing 100192

Applicant before: BEIJING CHINA POWER INFORMATION TECHNOLOGY Co.,Ltd.

Applicant before: STATE GRID INFORMATION & TELECOMMUNICATION GROUP Co.,Ltd.

Applicant before: State Grid Corporation of China

TA01 Transfer of patent application right

Effective date of registration: 20190717

Address after: Rooms 4108-4109, Building 32-3, Pioneer Road, Haidian District, Beijing 100085

Applicant after: BEIJING GUODIANTONG NETWORK TECHNOLOGY Co.,Ltd.

Applicant after: STATE GRID CORPORATION OF CHINA

Applicant after: STATE GRID INFORMATION & TELECOMMUNICATION GROUP Co.,Ltd.

Address before: Room 710, Research Building, No. 15 Xiaoying Road, Qinghe, Haidian District, Beijing 100192

Applicant before: BEIJING CHINA POWER INFORMATION TECHNOLOGY Co.,Ltd.

Applicant before: STATE GRID INFORMATION & TELECOMMUNICATION GROUP Co.,Ltd.

Applicant before: STATE GRID CORPORATION OF CHINA

GR01 Patent grant