CN100542139C - Resource allocation method and apparatus based on task grouping - Google Patents

Resource allocation method and apparatus based on task grouping

Info

Publication number: CN100542139C (grant); other version: CN101009642A (application publication, Chinese)
Application number: CNB2006101564647A
Authority: CN (China)
Inventor: 任艳花
Original and current assignee: Huawei Technologies Co., Ltd.
Application filed by Huawei Technologies Co., Ltd.; priority to CNB2006101564647A
Legal status: Expired - Fee Related (granted, later terminated for non-payment of the annual fee)

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a resource allocation method based on task grouping. Received tasks are first grouped to obtain different task groups; an expected resource count is then determined for each task group, and resources are allocated to the task groups according to these expected counts, so that tasks in different groups obtain an equal opportunity to be served by the resources. The invention also discloses a resource allocation apparatus based on task grouping, comprising a task grouping module and a management module: the task grouping module groups the received tasks, and the management module determines the expected resource count of each task group and allocates resources to the groups formed by the task grouping module according to those expected counts. The apparatus likewise gives tasks in different groups an equal opportunity to be served by the resources.

Description

Resource allocation method and apparatus based on task grouping
Technical field
The present invention relates to resource allocation techniques, and in particular to a resource allocation method and apparatus based on task grouping.
Background art
In server programs based on the client/server (C/S) model, allocating the server-side resources reasonably allows the server program to serve its clients better. Server-side resources include threads, processors, memory, bandwidth and so on.
A thread is a single sequential flow of control within a process. Creating a thread consumes far fewer system resources than creating a process, so for applications with a particularly large number of concurrent activities, using threads gives better performance than using processes.
Multithreading appeared in order to improve processor utilization. It allows multiple threads to execute concurrently on a processor unit, which markedly reduces the processor unit's idle time and increases its throughput. However, creating and destroying a thread both consume time slices of the processor, so if threads are created and destroyed frequently while the system is busy, the processing time of individual tasks increases and server performance may actually suffer.
The thread pool technique was introduced to reduce the impact of thread creation and destruction time on server performance. It moves thread creation and destruction to the startup and shutdown phases of the server program, or to other idle periods. A number of threads are created in advance and kept idle; when a client has a new task, an idle thread in the pool is woken up to handle it, and after the task is finished the thread returns to the idle state. In this way the server program no longer pays the cost of creating and destroying threads while handling client requests.
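As an illustration only (not part of the patent text), a fixed-size thread pool of the kind described above can be sketched in a few lines of Python; the class and method names are hypothetical.

```python
# Minimal thread pool sketch (illustrative only; class and method names are hypothetical).
import queue
import threading

class SimpleThreadPool:
    def __init__(self, num_threads):
        self.tasks = queue.Queue()
        # Threads are created once, at startup, and reused for every task.
        self.workers = [threading.Thread(target=self._worker, daemon=True)
                        for _ in range(num_threads)]
        for worker in self.workers:
            worker.start()

    def _worker(self):
        while True:
            func, args = self.tasks.get()   # block while idle, wake when a task arrives
            try:
                func(*args)                 # process the client task
            finally:
                self.tasks.task_done()      # return to the idle state

    def submit(self, func, *args):
        self.tasks.put((func, args))

# Usage: pool = SimpleThreadPool(4); pool.submit(print, "hello"); pool.tasks.join()
```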
At present, a thread pool schedules client tasks as follows: whenever an idle thread is available, it simply takes the next task from the task queue and processes it.
Fig. 1 is a schematic diagram illustrating this existing method of scheduling tasks with a thread pool. As shown in Fig. 1, suppose the task queue of the server application contains two kinds of tasks: a lightweight task J1, which occupies a thread for 1 millisecond (ms), and a heavyweight task J2, which occupies a thread for 100000 ms. Also suppose the thread pool contains two threads, Th1 and Th2, with identical processing capability.
As shown in Fig. 1, whenever Th1 or Th2 is idle it takes the next task from the task queue, so scheduling proceeds as follows:
Th1 takes a task J2 at 0 ms and is occupied by it from 0 ms to 100000 ms;
Th2 takes a task J1 at 0 ms and is occupied by it from 0 ms to 1 ms;
Th2 takes another task J1 at 1 ms and is occupied by it from 1 ms to 2 ms;
Th2 takes a task J2 at 2 ms and is occupied by it from 2 ms to 100002 ms;
Th1 takes a task J1 at 100000 ms and is occupied by it from 100000 ms to 100001 ms.
It can be seen that from 2 ms to 100000 ms both threads Th1 and Th2 of the pool are occupied by heavyweight tasks J2, so the easily handled lightweight tasks J1 must wait all of that time before they get a chance to be processed by a thread. Although heavyweight and lightweight tasks have an equal opportunity to seize an idle thread, once a heavyweight task seizes a thread of the pool it holds it for a long time. With the existing thread pool scheduling method, heavyweight and lightweight tasks therefore occupy very different shares of the pool's thread time. In the example of Fig. 1, suppose the queue holds 3 lightweight tasks J1 and 2 heavyweight tasks J2; the total time Totaltime needed to handle these five tasks equals the time to handle the 3 J1 tasks plus the time to handle the 2 J2 tasks, that is:
Totaltime = 3×1ms + 2×100000ms
Therefore the fraction of the total time spent handling the 3 J1 tasks, RateJ1, and the fraction spent handling the 2 J2 tasks, RateJ2, are respectively:
RateJ1 = 3×1ms/Totaltime = 3×1ms/(3×1ms+2×100000ms) = 0.0015%
RateJ2 = 2×100000ms/Totaltime = 2×100000ms/(3×1ms+2×100000ms) = 99.9985%
As these formulas show, even when the numbers of lightweight and heavyweight tasks are of the same order, the heavyweight tasks occupy a far larger share of the thread time, and the lightweight tasks are at a disadvantage when competing for the thread resources of the pool. In such a situation it easily happens that the heavyweight tasks occupy all of the thread resources while the lightweight tasks get no thread at all.
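The timeline and the ratios above can be reproduced with a small first-come-first-served simulation of the scheduling just described (a sketch for illustration only; the task mix follows the example of Fig. 1):

```python
# Sketch: first-come-first-served thread pool scheduling of Fig. 1 (illustrative only).
import heapq

def fifo_schedule(tasks, num_threads):
    """tasks: list of (name, duration_ms) in queue order; returns busy time per task name."""
    free_at = [0] * num_threads      # times at which each thread becomes free
    heapq.heapify(free_at)
    busy = {}
    for name, duration in tasks:
        start = heapq.heappop(free_at)            # the earliest idle thread takes the next task
        heapq.heappush(free_at, start + duration)
        busy[name] = busy.get(name, 0) + duration
        print(f"{name} occupies a thread from {start} ms to {start + duration} ms")
    return busy

# Queue of Fig. 1 in arrival order: J2, J1, J1, J2, J1 (3 x J1, 2 x J2), two threads.
busy = fifo_schedule([("J2", 100000), ("J1", 1), ("J1", 1), ("J2", 100000), ("J1", 1)], 2)
total = sum(busy.values())
print(f"RateJ1 = {100 * busy['J1'] / total:.4f}%  RateJ2 = {100 * busy['J2'] / total:.4f}%")
# Prints RateJ1 = 0.0015%  RateJ2 = 99.9985%, matching the formulas above.
```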
There are many practical cases in which heavyweight tasks occupy all of the thread resources. For example, Fig. 2 is the signaling flow chart of intelligent-network prepaid charging. In Fig. 2 the calling Mobile Switching Center/Visitor Location Register/Service Switching Point (MSCa/VLR/SSP) can be regarded as the client and the calling Service Control Point (SCPa) as the server; relative to the other tasks, the Initial Detection Point message (IDP) and the Apply Charging Report message (ACR) are heavyweight tasks, while the Basic Call State Model (BCSM) event report message (ERB) is a lightweight task.
If the thread resources of SCPa are scheduled with the existing thread resource scheduling method, then when the number of calls per second is large, the thread resources of SCPa are entirely occupied by IDP and ACR, and the ERB tasks of this flow cannot be processed by an SCPa thread in time, which causes session timeouts and raises the call loss rate.
As the description above shows, heavyweight tasks occupying all the thread resources harms both the server application and the client program. For the server program, during the period in which all thread resources are occupied by heavyweight tasks, the other tasks in the task queue cannot be scheduled and processed, so more and more tasks accumulate in the queue, the system memory occupied by the server program keeps growing, and eventually the task queue may become congested or the server application may even crash. For the client program, responses to its requests become untimely; and when a client task is composed of sub-tasks of several different weights, the sub-tasks of the other weights get no response because all server threads are occupied by the heavyweight tasks, so the client's service cannot be realized.
In short, with the existing thread pool scheduling method, some tasks may occupy all of the resources, leaving the other tasks unable to obtain service from the thread resources.
Summary of the invention
Embodiments of the invention provide a resource allocation method based on task grouping, which gives tasks in different groups an equal opportunity to be served by the resources.
Embodiments of the invention also provide a resource allocation apparatus based on task grouping, which gives tasks in different groups an equal opportunity to be served by the resources.
An embodiment of the invention discloses a resource allocation method based on task grouping, the method comprising:
grouping received tasks to obtain different task groups;
determining an expected resource count for each of the task groups according to the total number of resources, the total number of task groups and the task weight of each task group;
allocating resources to the task groups according to the expected resource counts;
the method further comprising: receiving a new task and judging whether the new task belongs to an existing task group; if so, assigning the new task to the corresponding existing task group; otherwise, creating a new task group, assigning the new task to it and, at the same time, re-determining the expected resource counts of the task groups according to the changed total number of task groups and re-allocating resources to the task groups according to the re-determined expected resource counts.
The invention also discloses a resource allocation apparatus based on task grouping, the apparatus comprising:
a task grouping module, configured to group received tasks and to place the tasks of each group into a corresponding task group queue;
a management module, configured to calculate the expected resource count of each task group in the task grouping module and to allocate resources to the task group queues according to the expected resource counts;
the task grouping module being further configured to receive a new task and judge whether it belongs to an existing task group; if so, the new task is placed into the corresponding existing task group queue; otherwise a new task group queue is created and the new task is placed into it;
the management module being further configured to recalculate the expected resource counts of the task groups according to the changed total number of task groups, and to re-allocate resources to the task groups according to the recalculated expected resource counts.
As can be seen from the technical solutions above, the embodiments of the invention first group the tasks to obtain different task groups, then determine an expected resource count for each group and allocate resources to each task group queue according to these expected counts. Tasks in different groups thus obtain an equal opportunity to be served by the resources, which avoids the situation in which the tasks of some groups occupy all of the resources while the tasks of the other groups cannot be served.
Description of drawings
Fig. 1 is a schematic diagram illustrating the existing method of scheduling tasks with a thread pool;
Fig. 2 is the signaling flow chart of intelligent-network prepaid charging;
Fig. 3 is the flow chart of the resource allocation method based on task grouping according to an embodiment of the invention;
Fig. 4 is the structural block diagram of the resource allocation apparatus based on task grouping according to an embodiment of the invention;
Fig. 5 is a schematic diagram of the thread pool resource allocation scheme based on task grouping according to an embodiment of the invention;
Fig. 6 is the structural block diagram of the thread resource allocation apparatus based on task grouping according to an embodiment of the invention;
Fig. 7 is the flow chart of the server application handling a task;
Fig. 8 is the flow chart of the server application scheduling an idle thread;
Fig. 9 is the flow chart of the server application scheduling a busy thread.
Embodiment
To give all kinds of tasks an equal opportunity to be served by the resources, embodiments of the invention first group the received tasks, then determine the expected resource count of each task group, and allocate resources to the task groups according to the expected resource counts.
Fig. 3 is the flow chart of the resource allocation method based on task grouping according to an embodiment of the invention, comprising the following steps:
Step 301: group the received tasks to obtain different task groups.
Here, the received tasks may be grouped according to the number of resources required to process a task and/or the attributes of the task, where the attributes of a task may be the client the task belongs to or the weight of the task.
Step 302: determine the expected resource count of each task group according to the total number of resources, and/or the total number of task groups, and/or the task weight of each task group, and allocate resources to the task groups according to the expected resource counts.
In this step, the expected resource count of each task group can be determined with the resource allocation formula shown as formula (1), whose concrete form is:
R_i = R × ( r_i / Σ_{m=1}^{M} r_m ),  i = 1, 2, …, M    (1)
where the expected resource count R_i denotes the expected number of server resources allocated to the i-th task group;
the total resource count R denotes the total number of server resources;
r_i denotes the task weight of the current i-th task group;
r_m denotes the task weight of the m-th task group, m = 1, 2, …, M;
the task group count M denotes the total number of task groups.
The meaning of formula (1) is: the percentage of the total resource count given to the i-th task group as its expected resource count equals the percentage that the weight of the tasks of the i-th group represents of the sum of the weights of the tasks of all M groups.
Whenever the parameter R, and/or r_i, and/or M of formula (1) changes, the resource allocation formula is recomputed, and server resources are allocated anew to each group of tasks according to the new result.
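As a minimal sketch of formula (1) (illustrative only; the function and variable names are not from the patent):

```python
# Sketch of formula (1): expected resource count of each task group (illustrative only).
def expected_resources(total_resources, group_weights):
    """R_i = R * r_i / sum(r_m) for each task group i."""
    weight_sum = sum(group_weights)
    return [total_resources * weight / weight_sum for weight in group_weights]

# Example: R = 10 resources, M = 3 task groups with weights r = 5, 3, 2
print(expected_resources(10, [5, 3, 2]))   # -> [5.0, 3.0, 2.0]
```

Whenever R, any r_i or M changes, the list is simply recomputed with the new arguments, as described above.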
The method shown in Fig. 3 can, to a certain extent, guarantee that the tasks of different groups obtain an equal opportunity to be served by the server resources.
Fig. 4 is the structural block diagram of the resource allocation apparatus based on task grouping according to an embodiment of the invention; the apparatus comprises a management module 401 and a task grouping module 402.
The management module 401 is configured to calculate the expected resource count of each task group in the task grouping module and to allocate resources to the task group queues according to the expected resource counts.
The task grouping module 402 receives the tasks sent by clients, groups the received tasks, and places the tasks of each group into the corresponding task group queue.
Below, the embodiment of the invention is described further, taking the thread pool resources of the server as a preferred embodiment.
Fig. 5 is a schematic diagram of the thread pool resource allocation scheme based on task grouping according to an embodiment of the invention. As shown in Fig. 5, a task of the client is first forwarded through the task executor 501, the interface between client and server, and then sent to the task-grouping-based thread resource allocation apparatus 502. According to the type of the task, the apparatus 502 places it into the corresponding one of the task group queues, and it dynamically allocates the thread resources of the thread pool 503 to the task groups according to the resource allocation formula.
In the embodiment shown in Fig. 5, assuming the resources are the threads of the server, the resource allocation formula (1) can be rewritten as the following thread resource allocation formula:
N_i = N × ( T_i / Σ_{m=1}^{M} T_m ),  i = 1, 2, …, M    (2)
where the expected thread count N_i denotes the expected number of threads serving the i-th task group;
the total thread count N denotes the total number of threads in the thread pool;
T_i denotes the time one thread needs to process a task of the i-th task group;
T_m denotes the time one thread needs to process a task of the m-th task group;
the task group count M denotes the total number of task groups.
Suppose all the tasks of Fig. 5 are divided into 3 groups by weight, i.e. M = 3. Group 1 contains the heavyweight tasks, group 3 the lightweight tasks, and group 2 the tasks whose weight lies between those of groups 1 and 3; here the weight of a task means how long the task occupies a thread when a thread processes it. Within each group the tasks are arranged in their queue in order of arrival. The thread pool contains N threads in total, and allocating the thread resources according to formula (2) gives: N_1 threads serving the tasks of group 1, N_2 threads serving the tasks of group 2 and N_3 threads serving the tasks of group 3, with N_1 + N_2 + N_3 = N.
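Continuing the three-group example, the sketch below evaluates formula (2) and rounds the result to whole threads; the patent does not specify a rounding rule, so the largest-remainder rounding used here (which keeps N_1 + N_2 + N_3 = N) is an assumption for illustration.

```python
# Sketch of formula (2) with integer thread counts (the rounding rule is an assumption).
def expected_threads(total_threads, processing_times_ms):
    """N_i = N * T_i / sum(T_m), rounded to integers that still sum to N."""
    time_sum = sum(processing_times_ms)
    exact = [total_threads * t / time_sum for t in processing_times_ms]
    counts = [int(share) for share in exact]              # floor of each share
    # Hand the remaining threads to the groups with the largest fractional parts.
    by_remainder = sorted(range(len(exact)), key=lambda i: exact[i] - counts[i], reverse=True)
    for i in by_remainder[: total_threads - sum(counts)]:
        counts[i] += 1
    return counts

# Example: N = 10 threads, T_1 = 100 ms, T_2 = 55 ms, T_3 = 45 ms (M = 3 groups).
print(expected_threads(10, [100, 55, 45]))   # -> [5, 3, 2], i.e. N_1=5, N_2=3, N_3=2
```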
Fig. 6 is the structural block diagram of the task-grouping-based thread resource allocation apparatus according to an embodiment of the invention. As shown in Fig. 6, the apparatus 502 comprises: a management module 601, a task grouping module 602, a dynamic analysis module 603, a thread resource scheduling module 604 and a task queue overload detection module 605.
The management module 601 is configured to allocate threads to each task group formed by the task grouping module 602 according to the thread resource allocation formula (2). The management module 601 also records the expected thread count N_i of the i-th task group, the actual number of threads currently serving the i-th task group, the time T_i a thread needs to process a task of the i-th task group, the total thread count N of the pool, the maximum number of tasks J_maxi that each task queue can hold, and the identification of each task group queue; the identification of a task group queue uniquely identifies the corresponding task group. In these parameters i = 1, 2, …, M. When the total group count M changes, the management module 601 recomputes the thread resource allocation formula (2) and records the result.
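The bookkeeping of the management module can be pictured with one small record per task group queue; the field and class names below are illustrative assumptions, not terms from the patent.

```python
# Sketch of the per-group records kept by the management module (names are illustrative).
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class GroupRecord:
    queue_id: str                                # identification of the task group queue
    expected_threads: Optional[int] = None       # N_i; None until it has been determined
    actual_threads: int = 0                      # threads currently serving this group
    processing_time_ms: Optional[float] = None   # T_i; None until the group has been analysed
    max_tasks: int = 1000                        # J_maxi, capacity of the task group queue

@dataclass
class ManagementModule:
    total_threads: int                           # N, total threads in the thread pool
    groups: Dict[str, GroupRecord] = field(default_factory=dict)

    def register_group(self, queue_id, max_tasks=1000):
        self.groups[queue_id] = GroupRecord(queue_id, max_tasks=max_tasks)

    def recompute_expected(self):
        """Re-evaluate formula (2) over every group whose T_i is already known."""
        known = [g for g in self.groups.values() if g.processing_time_ms is not None]
        time_sum = sum(g.processing_time_ms for g in known)
        for g in known:
            # Simple rounding; an exact apportionment as in the earlier sketch could be used.
            g.expected_threads = round(self.total_threads * g.processing_time_ms / time_sum)
```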
The task grouping module 602 is configured to receive the tasks sent by clients, group the received tasks, and register the identification of each task group queue with the management module 601.
Before placing a grouped task into the corresponding task group queue, the task grouping module 602 sends a detection notification to the task queue overload detection module 605, and only after receiving the not-overloaded notification returned by module 605 does it place the tasks of the different groups into their task group queues.
In this embodiment the task grouping module 602 groups the tasks by the length of time a thread needs to process them, i.e. by the weight of the tasks; such a grouping can, to a certain extent, guarantee that tasks of different weights obtain an equal opportunity to be processed by the threads. The tasks may also be grouped by the client identifier of the task, which can, to a certain extent, guarantee that the tasks of different clients obtain an equal opportunity to be processed. By analogy, the tasks can be grouped by various attributes, so that tasks in different groups obtain an equal opportunity to be processed by the threads.
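A sketch of the two grouping criteria mentioned above, grouping by weight or by client; the bucket boundaries and field names are assumptions chosen only for illustration.

```python
# Sketch of grouping keys (bucket boundaries and field names are illustrative assumptions).
def group_by_weight(processing_time_ms):
    """Map a task's processing time to a weight bucket, as in the present embodiment."""
    if processing_time_ms >= 10000:
        return "heavyweight"
    if processing_time_ms >= 100:
        return "middleweight"
    return "lightweight"

def group_by_client(task):
    """Alternative grouping: by the client the task belongs to."""
    return "client-%s" % task["client_id"]

print(group_by_weight(100000))              # -> heavyweight
print(group_by_client({"client_id": 42}))   # -> client-42
```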
The task queue overload detection module 605 is configured to, after receiving the detection notification sent by the task grouping module 602, read from the management module 601 the maximum number of tasks J_maxi that the corresponding task group queue can hold and check whether the number of tasks in that queue of the task grouping module has reached J_maxi. If it has not, the task queue overload detection module 605 returns a not-overloaded notification to the task grouping module 602.
The task queue overload detection module 605 may also itself record the maximum task count J_maxi of each task group queue, in which case it does not need to query the management module 601.
The dynamic analysis module 603 is configured to obtain tasks of the i-th task group queue from the task grouping module 602, analyse the time T_i a thread needs to process a task of the i-th task group queue, and register T_i with the management module 601, i = 1, 2, …, M. When a new task group queue is added, i.e. when tasks with an (M+1)-th kind of attribute appear among the client task requests, the dynamic analysis module receives the analysis notification sent by the thread resource scheduling module 604, obtains tasks of the (M+1)-th task group queue from the task grouping module 602, analyses the time T_{M+1} a thread needs to process a task of that group, and registers T_{M+1} with the management module 601.
After receiving T_{M+1}, the management module 601 recalculates the expected thread count N_i of each task group queue according to the thread resource allocation formula, i = 1, 2, …, M+1, and records the result.
In this embodiment the weight of a task in the resource allocation formula is the time T_i a thread needs to process it, and the tasks received by the task grouping module 602 do not themselves carry information about T_i, so the dynamic analysis module is needed to analyse the task weight T_i of each task group. If the tasks received by the task grouping module 602 do carry task weight information themselves, the task grouping module 602 registers the weights of the tasks of the different groups with the management module directly, and the dynamic analysis module 603 is not needed to analyse the weights.
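A sketch of the dynamic analysis step, estimating T_i by timing a sample task from the group; measuring a single task (rather than, say, a running average) is an assumption made for brevity, and `ManagementModule` refers to the illustrative record sketch above.

```python
# Sketch of the dynamic analysis module: estimate T_i by timing a sample task (illustrative).
import time

def analyse_group_time(process_task, sample_task):
    """Let a thread process one sample task of the group and return the elapsed time in ms."""
    start = time.monotonic()
    process_task(sample_task)
    return (time.monotonic() - start) * 1000.0

# Usage sketch: register the measured T_{M+1} and re-evaluate formula (2).
# mgmt.groups["new-group"].processing_time_ms = analyse_group_time(handler, task)
# mgmt.recompute_expected()
```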
The thread resource scheduling module 604 is configured to schedule the idle threads that are in the idle state and the busy threads that are processing current tasks; that is, when the total number of task groups changes, it re-allocates the resources by scheduling the idle resources and the busy resources.
The thread resource scheduling module 604 schedules the idle resources as follows:
First the thread resource scheduling module 604 sends an activation notification to the idle resources to activate the threads that are in the idle state. It then queries the management module 601 whether there is a task group queue whose expected resource count has not yet been determined, i.e. whether a newly added task group queue exists; if so, it sends an analysis notification to the dynamic analysis module 603, the notification containing the identification of the task group queue whose expected resource count has not been determined. According to the identification in the analysis notification, the dynamic analysis module 603 obtains tasks of that task group queue from the task grouping module 602, analyses the time T_{M+1} a thread needs to process a task of that queue, and registers T_{M+1} with the management module 601.
When there is no task group queue whose expected resource count has not yet been determined, the thread resource scheduling module 604 queries the management module 601 whether there is a task group queue whose actual thread count is less than its expected thread count N_i; the actual thread count is the number of threads currently serving that task group queue. If such a queue exists, the thread resource scheduling module 604 adds the newly activated thread to the thread workspace of that task group queue and registers the updated actual thread count of the queue with the management module 601; otherwise it releases the activated thread so that it returns to the idle state.
The thread resource scheduling module 604 schedules the busy resources as follows:
After a busy thread has finished processing its current task, the thread resource scheduling module 604 likewise queries the management module 601 whether there is a task group queue whose expected resource count has not yet been determined; if so, it sends an analysis notification to the dynamic analysis module 603, so that the dynamic analysis module 603 analyses the time T_{M+1} a thread needs to process a task of the group whose expected resource count has not been determined and registers T_{M+1} with the management module 601.
When there is no task group queue whose expected resource count has not yet been determined, the thread resource scheduling module 604 queries the management module 601 whether there is a task group queue whose actual thread count is less than its expected thread count N_i. If such a queue exists, the thread resource scheduling module 604 adds the busy thread to the thread workspace of that task group queue and registers the updated actual thread count of the queue with the management module 601; otherwise it releases the busy thread so that it becomes an idle thread.
In the processes above, the resource scheduling module 604 queries the management module 601 whether a task group queue whose expected resource count has not yet been determined exists; this is decided by looking up the identifications and expected resource counts of the task group queues recorded in the management module 601. For example, if the management module 601 has recorded the identification of a task group but not its expected resource count, that task group is one whose expected resource count has not yet been determined.
The technical solution of the embodiment of the invention is described further below through the processes by which the server application handles a task and schedules the threads of the thread pool.
Fig. 7 is the flow chart of the server application handling a task. As shown in Fig. 7, it comprises the following steps:
Step 701: determine the weight of the task submitted by the client program.
Step 702: according to the weight of the task, query the task group queues for a queue of the corresponding weight; if one exists, go to step 703, otherwise go to step 704.
Step 703: detect whether the task group queue of the corresponding weight is overloaded, i.e. check whether the number of tasks in the corresponding task group queue is greater than J_max; if so, go to step 706, otherwise go to step 707.
Step 704: create a new task group queue and add the task to it.
Step 705: re-allocate the threads according to the thread resource allocation formula (2). End the flow.
Step 706: reject the task submitted by the client. End the flow.
Step 707: add the task to the corresponding task group queue and send an activation notification to the idle threads of the thread pool.
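A sketch of the submission flow of steps 701-707 (illustrative only; the helper names, and `ManagementModule` from the earlier sketch, are assumptions):

```python
# Sketch of the Fig. 7 task-handling flow (helper and parameter names are illustrative).
def handle_submission(task, queues, mgmt, determine_weight, redistribute, notify_idle):
    group = determine_weight(task)                   # step 701: determine the task's weight
    if group not in queues:                          # step 702: no queue of this weight yet
        queues[group] = [task]                       # step 704: create the queue, add the task
        mgmt.register_group(group)
        redistribute()                               # step 705: re-allocate threads, formula (2)
        return True
    if len(queues[group]) >= mgmt.groups[group].max_tasks:
        return False                                 # steps 703/706: queue overloaded, reject
    queues[group].append(task)                       # step 707: enqueue the task
    notify_idle()                                    #           and wake the idle threads
    return True
```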
Re-allocating the threads according to the thread resource allocation formula (2), as described in step 705, is realized by scheduling the idle threads and the busy threads with a certain method. The idle thread and busy thread scheduling methods provided by the invention are introduced below.
Fig. 8 is the flow chart of the server application scheduling an idle thread. As shown in Fig. 8, it comprises the following steps:
Step 801: send an activation notification to the idle threads of the thread pool to activate a thread that is in the idle state.
Step 802: in the order in which the task group queues were created, query whether there is a task group whose expected resource count has not yet been determined; if so, go to step 803, otherwise go to step 804.
Step 803: through the self-learning of the thread, obtain the time T_{M+1} a thread needs to process a task of the group whose expected resource count has not been determined, recalculate the expected thread count N_i serving each task group according to the thread resource allocation formula (2), and record the result. Go to step 802.
Step 804: in the order in which the task group queues were created, query whether there is a task group whose actual thread count has not reached its expected value; if so, go to step 805, otherwise go to step 806.
A task group whose actual thread count has not reached its expected value is one for which the number of threads currently serving the group is less than the expected thread count N_i of its task group queue computed with the thread resource allocation formula (2).
Step 805: add the activated idle thread to the thread workspace of the task group whose actual thread count has not reached its expected value. End the flow.
Step 806: release the thread so that it returns to the idle state and waits to be activated.
Fig. 9 is the flow chart of the server application scheduling a busy thread. As shown in Fig. 9, it comprises the following steps:
Step 901: when the thread is busy, wait until the busy thread has finished handling its task, then go to step 902.
Step 902: in the order in which the task group queues were created, query whether there is a task group whose expected resource count has not yet been determined; if so, go to step 903, otherwise go to step 904.
Step 903: through the self-learning of the thread, obtain the time T_{M+1} a thread needs to process a task of the group whose expected resource count has not been determined, recalculate the expected thread count N_i serving each task group according to the thread resource allocation formula (2), and record the result. Go to step 902.
Step 904: in the order in which the task group queues were created, query whether there is a task group whose actual thread count has not reached its expected value; if so, go to step 905, otherwise go to step 906.
Step 905: add the thread to the thread workspace of the task group whose actual thread count has not reached its expected value. End the flow.
Step 906: release the thread so that it becomes an idle thread and waits to be activated.
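Once a thread becomes available, whether freshly activated (Fig. 8) or having just finished its task (Fig. 9), the remaining decision is the same; a combined sketch follows, using the illustrative `ManagementModule` record from above (names are assumptions, not terms from the patent).

```python
# Combined sketch of the Fig. 8 / Fig. 9 scheduling decision (names are illustrative).
def schedule_available_thread(thread, mgmt, analyse_new_group, workspaces):
    # Steps 802/902 and 803/903: analyse any group whose expected count is still unknown,
    # then re-evaluate formula (2). Dict order preserves queue-creation order.
    while True:
        pending = [g for g in mgmt.groups.values() if g.processing_time_ms is None]
        if not pending:
            break
        pending[0].processing_time_ms = analyse_new_group(pending[0].queue_id)
        mgmt.recompute_expected()

    # Steps 804/904 and 805/905: attach the thread to a group below its expected count.
    for group in mgmt.groups.values():
        if group.expected_threads is not None and group.actual_threads < group.expected_threads:
            workspaces.setdefault(group.queue_id, []).append(thread)
            group.actual_threads += 1
            return group.queue_id

    return None   # steps 806/906: no group needs the thread; release it back to the idle state
```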
In the thread scheduling process of the server application above, it is first queried whether there is a task group whose expected resource count has not yet been determined; if so, the weight of the tasks of that group is analysed first, and then the expected resource counts of all the task groups are re-determined. This scheme avoids the situation in which a newly added task group cannot obtain thread processing for a long time.
The above are only preferred embodiments of the invention and are not intended to limit the scope of protection of the invention; any modification, equivalent replacement or improvement made within the spirit and principles of the invention shall be included within the scope of protection of the invention.

Claims (10)

1. A resource allocation method based on task grouping, characterized in that the method comprises:
grouping received tasks to obtain different task groups;
determining an expected resource count for each of the task groups according to the total number of resources, the total number of task groups and the task weight of each task group;
allocating resources to the task groups according to the expected resource counts;
the method further comprising: receiving a new task and judging whether the new task belongs to an existing task group; if so, assigning the new task to the corresponding existing task group; otherwise, creating a new task group, assigning the new task to it and, at the same time, re-determining the expected resource counts of the task groups according to the changed total number of task groups and re-allocating resources to the task groups according to the re-determined expected resource counts.
2. The method of claim 1, characterized in that, before assigning the new task to the corresponding existing task group, the method further comprises: determining that the corresponding existing task group is not overloaded.
3. The method of claim 1, characterized in that re-determining the expected resource counts of the task groups and allocating resources to the task groups according to the re-determined expected resource counts comprises:
querying whether there is a task group whose expected resource count has not yet been determined; if so, re-determining the expected resource counts of the current task groups according to the total number of resources, the total number of task groups and the task weight of each current task group;
when there is no task group whose expected resource count has not yet been determined, querying whether there is a task group whose actual resource count is less than its expected resource count; if so, allocating an idle resource, or a busy resource that is processing a current task, to the task group whose actual resource count is less than its expected resource count; otherwise, releasing the idle resource or the busy resource that is processing a current task.
4. The method of claim 1, characterized in that grouping the received tasks is specifically: grouping according to the number of resources required to process the received tasks and/or the attributes of the received tasks.
5. The method of any one of claims 1 to 4, characterized in that the resources are specifically processors, threads, memory or bandwidth of the server.
6. A resource allocation apparatus based on task grouping, characterized in that the apparatus comprises:
a task grouping module, configured to group received tasks and to place the tasks of each group into a corresponding task group queue;
a management module, configured to calculate the expected resource count of each task group in the task grouping module and to allocate resources to the task group queues according to the expected resource counts;
the task grouping module being further configured to receive a new task and judge whether it belongs to an existing task group; if so, the new task is placed into the corresponding existing task group queue; otherwise a new task group queue is created and the new task is placed into it;
the management module being further configured to recalculate the expected resource counts of the task groups according to the changed total number of task groups, and to re-allocate resources to the task groups according to the recalculated expected resource counts.
7. The apparatus of claim 6, characterized in that the apparatus further comprises: a dynamic analysis module, configured to obtain tasks from the task group queues of the task grouping module, analyse the weights of the tasks of the task group queues, and register the weights of the tasks of the task group queues with the management module;
the task grouping module being further configured to register the identifications of the task group queues with the management module;
the management module being further configured to receive and record the registered task-weight values of the task group queues sent by the dynamic analysis module and the identifications of the task group queues sent by the task grouping module, and to record the calculated expected resource count of each task group queue and the actual resource count actually allocated to each task group queue; the management module calculates the expected resource counts of the task groups according to the total number of resources, the total number of task groups and the task weight of each task group.
8. The apparatus of claim 7, characterized in that the apparatus further comprises: a resource scheduling module, configured to query the management module whether there is a task group queue whose expected resource count has not yet been determined, and, when such a queue exists, to send an analysis notification to the dynamic analysis module;
the dynamic analysis module being further configured to receive the analysis notification from the resource scheduling module, obtain tasks of the task group queue whose expected resource count has not been determined from the task grouping module, analyse the weight of the tasks of that queue, and register the analysed task weight with the management module;
the management module being further configured to receive the registered task-weight value from the dynamic analysis module and to recalculate and record the expected resource counts of the current task groups.
9. The apparatus of claim 8, characterized in that, when there is no task group queue whose expected resource count has not yet been determined,
the resource scheduling module is further configured to query the management module whether there is a task group queue whose actual resource count is less than its expected resource count; when such a queue exists, to allocate an idle resource, or a busy resource that is processing a current task, to the task group queue whose actual resource count is less than its expected resource count, and to update the actual resource count of that task group queue recorded in the management module;
when no such queue exists, the resource scheduling module releases the idle resource or the busy resource that is processing a current task.
10. The apparatus of claim 6, characterized in that the apparatus further comprises: a task queue overload detection module, configured to receive a detection notification from the task grouping module, detect whether the corresponding task group queue is overloaded and, when it is not overloaded, send a not-overloaded notification to the task grouping module;
the task grouping module being further configured to receive the not-overloaded notification from the task queue overload detection module and to place the grouped task into the corresponding task group queue.
CNB2006101564647A 2006-12-31 2006-12-31 Resource allocation method and apparatus based on task grouping Expired - Fee Related CN100542139C (en)

Priority Applications (1)

Application Number: CNB2006101564647A — Priority Date: 2006-12-31 — Filing Date: 2006-12-31 — Title: Resource allocation method and apparatus based on task grouping

Publications (2)

Publication Number | Publication Date
CN101009642A (en) | 2007-08-01
CN100542139C | 2009-09-16

Family

ID=38697785

Family Applications (1)

Application Number: CNB2006101564647A — Title: Resource allocation method and apparatus based on task grouping — Status: Expired - Fee Related — Grant: CN100542139C (en)

Country Status (1)

Country: CN — CN (1) CN100542139C (en)


Also Published As

Publication Number | Publication Date
CN101009642A (en) | 2007-08-01


Legal Events

C06 / PB01: Publication
C10 / SE01: Entry into substantive examination (entry into force of request for substantive examination)
C14 / GR01: Grant of patent or utility model (patent granted)
C17 / CF01: Cessation of patent right — termination of patent right due to non-payment of the annual fee
    Granted publication date: 2009-09-16
    Termination date: 2012-12-31