CN102122252A - Method for designing thread pool capable of ensuring temporal succession - Google Patents

Method for designing thread pool capable of ensuring temporal succession

Info

Publication number
CN102122252A
CN102122252A (application CN201110060502A; granted as CN102122252B)
Authority
CN
China
Prior art keywords
thread
task
load
value
load value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201110060502
Other languages
Chinese (zh)
Other versions
CN102122252B (en)
Inventor
王非
黄本雄
卢正新
全中伟
邓磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201110060502
Publication of CN102122252A
Application granted
Publication of CN102122252B
Expired - Fee Related
Anticipated expiration

Landscapes

  • Complex Calculations (AREA)

Abstract

The invention relates to a method for designing a thread pool that guarantees temporal succession. The method comprises the following steps: maintaining, for the thread pool, a hash mapping table from task identifiers to thread IDs, used to manage task distribution; adding an attribute to the thread parameters, the load (nLoad), which quantifies the total amount of tasks assigned to each thread; and, instead of having all threads share one task queue, configuring a dedicated task queue for each thread in the pool, from which that thread fetches its new tasks, together with a separate monitor thread. The advantages of the method are that: 1) with the added mapping table, the temporal succession of each task's input and output data is kept fully consistent; and 2) the periodically started monitor thread keeps the load on all threads close to equal, achieving the best concurrency.

Description

A thread pool design method that guarantees temporal succession
Technical field
The present invention relates to a thread pool design method, and more particularly to a thread pool design method that guarantees temporal succession.
Background art
A thread pool is a form of multithreaded processing. A thread pool pre-creates a number of threads to execute the tasks of an application; these threads are usually organized as a queue. In general, the number of tasks to be executed is greater than the number of threads in the pool. When a thread has finished one task, it requests a new task from the task queue and executes it, until all tasks in the queue are finished. The threads in the pool are then suspended or put to sleep until new tasks arrive.
In a traditional thread pool, tasks are executed with different priorities, and tasks of the same priority are treated completely equally. Complete equality, however, is not suitable for every application scenario. In a real-time database, for example, all data must preserve temporal succession; but because all tasks in the thread pool execute in parallel, a task that arrives later may finish before an earlier one, so that the data output by the thread pool is reversed in time with respect to the input data.
Summary of the invention
The technical problem to be solved by the invention is to provide a design method for the thread pool used to process data in a real-time database. While providing efficient data-processing capacity, the method also guarantees the temporal succession of the data handled by each thread in the pool and the load balance among the threads.
To solve the above technical problem, the thread pool of the present invention maintains a hash mapping table from task identifiers to thread IDs, used to manage the distribution of tasks. The thread parameters are extended with one attribute, the load (nLoad), which quantifies the total amount of tasks assigned to each thread. The load is quantified from the frequency (nFreq) with which the task arrives at the thread pool within a period of time, where the length of the period (nPeriod, in minutes) can be set according to the application scenario. The quantification formula is:

nLoad = (nPeriod × 60) / nFreq

If the computed nLoad is 0, it is recorded as 1. The threads no longer share one task queue; instead, each thread in the pool is configured with its own task queue and fetches new tasks only from that queue. A separate monitor thread is also provided: it normally sleeps and is started periodically to check the load of every thread, using the threads' load parameter as the criterion. A minimal code sketch of these structures is given below.
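The patent gives no source code; the following minimal sketch (in Java, chosen only for illustration) shows the structures just described: the task-ID-to-thread-ID hash map, one task queue per thread, a per-thread load counter, and the nLoad quantification with its floor of 1. All class, field and method names are assumptions, not taken from the patent.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

/** Illustrative skeleton of the thread pool described above (names are not from the patent). */
class SuccessionThreadPool {
    /** Hash map from task identifier to the index of the thread that handles it. */
    final Map<String, Integer> taskToThread = new ConcurrentHashMap<>();
    /** One task queue per worker thread instead of a single shared queue. */
    final List<BlockingQueue<Runnable>> queues = new ArrayList<>();
    /** Accumulated load (sum of nLoad values) per worker thread, i.e. LThread. */
    final long[] threadLoad;

    SuccessionThreadPool(int nThreads) {
        threadLoad = new long[nThreads];              // LThread initialised to 0
        for (int i = 0; i < nThreads; i++) {
            queues.add(new LinkedBlockingQueue<>());  // private queue for thread i
        }
    }

    /**
     * Quantify a task's load from its arrival frequency, as in the formula above:
     * nLoad = nPeriod * 60 / nFreq, recorded as 1 when the integer result is 0.
     */
    static long quantifyLoad(long nPeriodMinutes, long nFreq) {
        long nLoad = nPeriodMinutes * 60 / nFreq;
        return nLoad == 0 ? 1 : nLoad;
    }
}
```

Keeping one queue per thread is what lets all data of a given task be handled by a single thread, so that task's outputs keep the order in which its inputs arrived.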
In the above scheme, the steps are as follows:
Step 1, initialization: the thread pool creates N threads according to its configuration, initializes the hash mapping table from task identifiers to thread IDs and the task queue of each thread, and initializes the load attribute LThread of every thread to 0;
Step 2, task addition: the load value LTask of the newly added task is first computed with the load formula; a sequential search finds the most lightly loaded thread in the pool; the task is assigned to that thread, the hash mapping table from task identifiers to thread IDs is updated, and the task's load value LTask is added to the thread's previous load value LThread, i.e.

LThread_new = LThread_old + LTask;
Step 3, task deletion: the task's load value is first subtracted from the load value of the corresponding thread, then the hash mapping table from task identifiers to thread IDs is updated and the mapping entry of this task is removed from the table;
Step 4, task update: when the frequency of a task changes, the load of its thread must be updated; this is done by first deleting the task and then adding it again;
Step 5, load balancing of the thread pool: tasks are added in step 2 under the criterion that the loads of all threads stay roughly equal. When tasks keep being deleted, however, the loads of the threads become inconsistent; in the worst case only one thread still has tasks while all other threads sleep, which directly degrades concurrency. The monitor thread therefore periodically scans the load of every thread in the pool and judges it by the variance:

vLoad = [(Load_1 - mLoad)² + (Load_2 - mLoad)² + … + (Load_N - mLoad)²] / N

where vLoad is the variance of the load, N is the number of threads, Load_i (i = 1, 2, …, N) is the load value of each thread, and mLoad is the mean of all load values:

mLoad = (Load_1 + Load_2 + … + Load_N) / N

When the computed load variance exceeds LoadMax (100 by default; a smaller value may be set), the monitor thread performs a load-balancing operation and reassigns tasks from threads whose load deviates far above the mean to more lightly loaded threads; a sketch of this variance check follows the step list below;
Step 6, task execution: when a task arrives, the thread ID that handles it is first obtained from the mapping table using its task identifier, and the task is then delivered to the task queue of that thread. Each thread fetches new tasks from its own queue and executes them.
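The variance test of step 5 maps directly onto the two formulas above. The sketch below (Java, illustrative names only) computes mLoad and vLoad and compares the result with LoadMax; the sample loads in main are hypothetical.

```java
/** Computes the mean and variance of per-thread load and decides whether to rebalance. */
class LoadBalanceCheck {
    static final double LOAD_MAX = 100.0;    // default threshold from the description

    /** mLoad: mean of all per-thread load values. */
    static double mean(long[] loads) {
        double sum = 0;
        for (long l : loads) sum += l;
        return sum / loads.length;
    }

    /** vLoad: population variance of the per-thread load values. */
    static double variance(long[] loads) {
        double m = mean(loads);
        double sq = 0;
        for (long l : loads) sq += (l - m) * (l - m);
        return sq / loads.length;
    }

    static boolean needsRebalance(long[] loads) {
        return variance(loads) > LOAD_MAX;
    }

    public static void main(String[] args) {
        long[] loads = {120, 80, 100, 95};   // hypothetical per-thread loads
        System.out.printf("mLoad=%.1f vLoad=%.1f rebalance=%b%n",
                mean(loads), variance(loads), needsRebalance(loads));
    }
}
```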
In the above scheme, for a task arriving at the thread pool, the task distribution module first looks up in the mapping table, by the task identifier, the thread ID to which the task must be delivered, and then sends the task to the task queue of that thread; the threads in the pool continuously request new tasks from their own queues, execute them and output the results.
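A sketch of this dispatch path, assuming a task-ID-to-thread-index map and per-thread queues as in the earlier skeleton; the class and method names are illustrative, not prescribed by the patent.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.BlockingQueue;

/** Dispatches an arriving task to the queue of the thread recorded for its task ID. */
class TaskDispatcher {
    private final Map<String, Integer> taskToThread;        // task ID -> thread index
    private final List<BlockingQueue<Runnable>> queues;     // one queue per thread

    TaskDispatcher(Map<String, Integer> taskToThread, List<BlockingQueue<Runnable>> queues) {
        this.taskToThread = taskToThread;
        this.queues = queues;
    }

    /** Looks up the owning thread in the hash map and enqueues the work on that thread's queue. */
    void dispatch(String taskId, Runnable work) throws InterruptedException {
        Integer threadIndex = taskToThread.get(taskId);
        if (threadIndex == null) {
            throw new IllegalStateException("task " + taskId + " has not been added to the pool");
        }
        // All data for one task ID always goes to one thread, so its results
        // come out in arrival order.
        queues.get(threadIndex).put(work);
    }
}
```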
Preferably, step 2 comprises the following sub-steps (a code sketch follows this list):
Step 201: compute the load value of the newly added task from its frequency, using the load formula;
Step 202: find, by sequential traversal, a thread with the smallest load value among all threads;
Step 203: add the task's load value to the least-loaded thread found in step 202;
Step 204: compute the load variance of all threads in the pool with the variance formula;
Step 205: judge whether the variance computed in step 204 exceeds the preset LoadMax (default 100);
Step 206: if LoadMax is exceeded, perform load balancing;
Step 207: compute the hash value of the new task's identifier and add it to the mapping table; if the addition fails, subtract the task's load value from the load value of the corresponding thread.
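Steps 201 to 207 can be sketched as follows (Java, with names that are assumptions rather than the patent's). The variance check and rebalancing of steps 204 to 206 are only indicated by a comment, and the rollback of step 207 is modelled by removing the load that was just added when the map insertion fails.

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of task addition (steps 201-207): assign the new task to the least-loaded thread. */
class TaskAdder {
    final Map<String, Integer> taskToThread = new HashMap<>();
    final long[] threadLoad;                 // LThread per thread
    static final double LOAD_MAX = 100.0;

    TaskAdder(int nThreads) { threadLoad = new long[nThreads]; }

    synchronized boolean addTask(String taskId, long nPeriodMinutes, long nFreq) {
        // Step 201: quantify the task's load, with a floor of 1.
        long lTask = Math.max(1, nPeriodMinutes * 60 / nFreq);
        // Step 202: sequential scan for the least-loaded thread.
        int lightest = 0;
        for (int i = 1; i < threadLoad.length; i++) {
            if (threadLoad[i] < threadLoad[lightest]) lightest = i;
        }
        // Step 203: add the task's load to that thread.
        threadLoad[lightest] += lTask;
        // Steps 204-206: variance check; a full implementation would rebalance here (steps 401-407).
        // Step 207: record the mapping; on failure, roll the load back.
        if (taskToThread.putIfAbsent(taskId, lightest) != null) {
            threadLoad[lightest] -= lTask;   // undo the load update
            return false;
        }
        return true;
    }
}
```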
Preferably, step 3 comprises the following sub-steps (a code sketch follows this list):
Step 301: compute the hash value of the identifier of the task to be deleted;
Step 302: find the thread ID corresponding to this task in the mapping table by the hash value, and record the task's load value;
Step 303: subtract the task's load value from the total load value of the corresponding thread;
Step 304: compute the load variance of all threads in the pool with the variance formula;
Step 305: judge whether the variance computed in step 304 exceeds the preset LoadMax (default 100);
Step 306: if LoadMax is exceeded, perform load balancing;
Step 307: delete the task from the mapping table; if the deletion fails, restore the task's load value to the load value of the corresponding thread.
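A corresponding sketch of task deletion (steps 301 to 307), again in Java with illustrative names; the per-task load bookkeeping map is an assumption, and the failed map removal of step 307 is modelled by restoring the subtracted load.

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of task deletion (steps 301-307): release the task's load and drop its mapping. */
class TaskRemover {
    final Map<String, Integer> taskToThread = new HashMap<>();
    final Map<String, Long> taskLoad = new HashMap<>();   // remembered LTask per task (assumed bookkeeping)
    final long[] threadLoad;

    TaskRemover(int nThreads) { threadLoad = new long[nThreads]; }

    synchronized boolean removeTask(String taskId) {
        // Steps 301-302: locate the owning thread and the task's recorded load value.
        Integer threadIndex = taskToThread.get(taskId);
        Long lTask = taskLoad.get(taskId);
        if (threadIndex == null || lTask == null) return false;
        // Step 303: subtract the task's load from that thread's total.
        threadLoad[threadIndex] -= lTask;
        // Steps 304-306: variance check and, if needed, rebalancing (steps 401-407) would follow here.
        // Step 307: remove the mapping; restore the load if the removal fails.
        if (taskToThread.remove(taskId) == null) {
            threadLoad[threadIndex] += lTask;             // roll back
            return false;
        }
        taskLoad.remove(taskId);
        return true;
    }
}
```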
Preferably, step 5 comprises the following sub-steps (a code sketch follows this list):
Step 401: the monitor thread normally sleeps and is activated periodically (default interval 60 minutes);
Step 402: compute the load variance of all threads in the pool with the variance formula;
Step 403: judge whether the variance computed in step 402 exceeds the preset LoadMax (default 100); if not, return to sleep;
Steps 404 and 405: if LoadMax is exceeded, traverse the load values of all threads and find a thread with the largest load value and a thread with the smallest load value;
Step 406: move one task from the most heavily loaded thread to the least loaded thread;
Step 407: update the mapping table from task identifiers to thread IDs, changing the mapped thread ID of the moved task to the ID of the less loaded thread.
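A sketch of the monitor thread of steps 401 to 407 (Java; the class, the single-task migration per wake-up, and all names are illustrative assumptions). The migration itself is left as a stub because it depends on the pool's internal bookkeeping.

```java
/** Periodic monitor (steps 401-407): wake up, check the load variance, move one task if needed. */
class LoadMonitor extends Thread {
    private final long[] threadLoad;
    private final long intervalMillis;        // default interval is 60 minutes in the description
    static final double LOAD_MAX = 100.0;

    LoadMonitor(long[] threadLoad, long intervalMillis) {
        this.threadLoad = threadLoad;
        this.intervalMillis = intervalMillis;
        setDaemon(true);
    }

    @Override public void run() {
        try {
            while (true) {
                Thread.sleep(intervalMillis);                 // step 401: sleep between scans
                double mean = 0;
                for (long l : threadLoad) mean += l;
                mean /= threadLoad.length;                    // mLoad
                double variance = 0;
                for (long l : threadLoad) variance += (l - mean) * (l - mean);
                variance /= threadLoad.length;                // step 402: vLoad
                if (variance <= LOAD_MAX) continue;           // step 403: within bounds, sleep again
                int max = 0, min = 0;                         // steps 404-405: busiest and idlest threads
                for (int i = 1; i < threadLoad.length; i++) {
                    if (threadLoad[i] > threadLoad[max]) max = i;
                    if (threadLoad[i] < threadLoad[min]) min = i;
                }
                migrateOneTask(max, min);                     // steps 406-407
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();               // stop monitoring on interrupt
        }
    }

    /** Steps 406-407: move one task from the busiest to the idlest thread and update the map (stub). */
    void migrateOneTask(int from, int to) {
        // A full implementation would pick a task mapped to 'from', change its entry in the
        // task-ID -> thread-ID hash map to 'to', and transfer the task's load value.
    }
}
```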
Thanks to the above technical scheme, the present invention has the following advantages:
1. By means of the added mapping table, the invention guarantees that the temporal succession of each task's input and output data stays fully consistent.
2. The periodically started monitor thread keeps the loads of all threads close to equal, so that the best concurrency is achieved.
Description of drawings
The technical scheme of the present invention is described in further detail below with reference to the drawings and specific embodiments.
Fig. 1 is a structural diagram of the thread pool of the present invention;
Fig. 2 is a flow chart of adding a new task according to the present invention;
Fig. 3 is a flow chart of deleting a task according to the present invention;
Fig. 4 is a flow chart of load balancing according to the present invention.
Embodiment
As shown in Fig. 1, for a task arriving at the thread pool, the task distribution module first looks up in the mapping table, by the task identifier, the thread ID to which the task must be delivered, and then sends the task to the task queue of that thread; the threads in the pool continuously request new tasks from their own queues, execute them and output the results.
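Each worker thread in Fig. 1 simply blocks on its own private queue, which is what keeps the data of one task in arrival order. A minimal run loop might look like the following (Java, illustrative names; the patent does not prescribe an implementation language).

```java
import java.util.concurrent.BlockingQueue;

/** Worker loop sketch for Fig. 1: each pool thread serves only its own queue. */
class Worker implements Runnable {
    private final BlockingQueue<Runnable> myQueue;   // this thread's private task queue

    Worker(BlockingQueue<Runnable> myQueue) { this.myQueue = myQueue; }

    @Override public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                Runnable work = myQueue.take();      // block until a task arrives for this thread
                work.run();                          // execute and output the result in arrival order
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();      // exit cleanly when the pool shuts down
        }
    }
}
```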
Fig. 2 shows the flow of adding a new task to the thread pool; the concrete steps are as follows:
Step 201: compute the load value of the newly added task from its frequency, using the load formula.
Step 202: find, by sequential traversal, a thread with the smallest load value among all threads.
Step 203: add the task's load value to the least-loaded thread found in step 202.
Step 204: compute the load variance of all threads in the pool with the variance formula.
Step 205: judge whether the variance computed in step 204 exceeds the preset LoadMax (default 100).
Step 206: if LoadMax is exceeded, perform load balancing; for the detailed process see steps 401 to 407.
Step 207: compute the hash value of the new task's identifier and add it to the mapping table; if the addition fails, subtract the task's load value from the load value of the corresponding thread.
Fig. 3 shows the flow of deleting a task from the thread pool; it specifically comprises the following steps:
Step 301: compute the hash value of the identifier of the task to be deleted.
Step 302: find the thread ID corresponding to this task in the mapping table by the hash value, and record the task's load value.
Step 303: subtract the task's load value from the total load value of the corresponding thread.
Step 304: compute the load variance of all threads in the pool with the variance formula.
Step 305: judge whether the variance computed in step 304 exceeds the preset LoadMax (default 100).
Step 306: if LoadMax is exceeded, perform load balancing; for the detailed process see steps 401 to 407.
Step 307: delete the task from the mapping table; if the deletion fails, restore the task's load value to the load value of the corresponding thread.
Fig. 4 shows the flow of load balancing in the thread pool; it specifically comprises the following steps:
Step 401: the monitor thread normally sleeps and is activated periodically (default interval 60 minutes).
Step 402: compute the load variance of all threads in the pool with the variance formula.
Step 403: judge whether the variance computed in step 402 exceeds the preset LoadMax (default 100); if not, return to sleep.
Steps 404 and 405: if LoadMax is exceeded, traverse the load values of all threads and find a thread with the largest load value and a thread with the smallest load value.
Step 406: move one task from the most heavily loaded thread to the least loaded thread.
Step 407: update the mapping table from task identifiers to thread IDs, changing the mapped thread ID of the moved task to the ID of the less loaded thread.
Finally, it should be noted that the above embodiments are intended only to illustrate, and not to limit, the technical scheme of the present invention. Although the present invention has been described in detail with reference to preferred embodiments, those of ordinary skill in the art should understand that the technical scheme of the present invention may be modified or replaced by equivalents without departing from the spirit and scope of the technical solution of the present invention, and all such modifications shall be covered by the scope of the claims of the present invention.

Claims (8)

1. A thread pool design method that guarantees temporal succession, characterized in that the thread pool maintains a hash mapping table from task identifiers to thread IDs, used to manage the distribution of tasks; the thread parameters are extended with one attribute, the load (nLoad), which quantifies the total amount of tasks assigned to each thread; the load is quantified from the frequency (nFreq) with which the task arrives at the thread pool within a period of time, where the length of the period (nPeriod, in minutes) can be set according to the application scenario, and the quantification formula is:

nLoad = (nPeriod × 60) / nFreq

if the computed nLoad is 0, it is recorded as 1; the threads no longer share one task queue, but each thread in the pool is configured with its own task queue and fetches new tasks from that queue; a separate monitor thread is provided, which normally sleeps and is started periodically to check the load of every thread, using the threads' load parameter as the criterion.
2. The thread pool design method that guarantees temporal succession according to claim 1, characterized in that the steps of the method are as follows:
Step 1, initialization: the thread pool creates N threads according to its configuration, initializes the hash mapping table from task identifiers to thread IDs and the task queue of each thread, and initializes the load attribute LThread of every thread to 0;
Step 2, task addition: the load value LTask of the newly added task is first computed with the load formula; a sequential search finds the most lightly loaded thread in the pool; the task is assigned to that thread, the hash mapping table from task identifiers to thread IDs is updated, and the task's load value LTask is added to the thread's previous load value LThread, i.e.

LThread_new = LThread_old + LTask;

Step 3, task deletion: the task's load value is first subtracted from the load value of the corresponding thread, then the hash mapping table from task identifiers to thread IDs is updated and the mapping entry of this task is removed from the table;
Step 4, task update: when the frequency of a task changes, the load of its thread must be updated; this is done by first deleting the task and then adding it again;
Step 5, load balancing of the thread pool: tasks are added in step 2 under the criterion that the loads of all threads stay roughly equal; when tasks keep being deleted, however, the loads of the threads become inconsistent, and in the worst case only one thread still has tasks while all other threads sleep, which directly degrades concurrency; the monitor thread therefore periodically scans the load of every thread in the pool and judges it by the variance:

vLoad = [(Load_1 - mLoad)² + (Load_2 - mLoad)² + … + (Load_N - mLoad)²] / N

where vLoad is the variance of the load, N is the number of threads, Load_i (i = 1, 2, …, N) is the load value of each thread, and mLoad is the mean of all load values:

mLoad = (Load_1 + Load_2 + … + Load_N) / N

when the computed load variance exceeds LoadMax (100 by default; a smaller value may be set), the monitor thread performs a load-balancing operation and reassigns tasks from threads whose load deviates far above the mean to more lightly loaded threads;
Step 6, task execution: when a task arrives, the thread ID that handles it is first obtained from the mapping table using its task identifier, and the task is then delivered to the task queue of that thread; each thread fetches new tasks from its own queue and executes them.
3. The thread pool design method that guarantees temporal succession according to claim 2, characterized in that, for a task arriving at the thread pool, the task distribution module first looks up in the mapping table, by the task identifier, the thread ID to which the task must be delivered, and then sends the task to the task queue of that thread; the threads in the pool continuously request new tasks from their own queues, execute them and output the results.
4. The thread pool design method that guarantees temporal succession according to claim 2 or 3, characterized in that step 2 comprises the following sub-steps:
Step 201: compute the load value of the newly added task from its frequency, using the load formula;
Step 202: find, by sequential traversal, a thread with the smallest load value among all threads;
Step 203: add the task's load value to the least-loaded thread found in step 202;
Step 204: compute the load variance of all threads in the pool with the variance formula;
Step 205: judge whether the variance computed in step 204 exceeds the preset LoadMax (default 100);
Step 206: if LoadMax is exceeded, perform load balancing;
Step 207: compute the hash value of the new task's identifier and add it to the mapping table; if the addition fails, subtract the task's load value from the load value of the corresponding thread.
5. The thread pool design method that guarantees temporal succession according to claim 2 or 3, characterized in that step 3 comprises the following sub-steps:
Step 301: compute the hash value of the identifier of the task to be deleted;
Step 302: find the thread ID corresponding to this task in the mapping table by the hash value, and record the task's load value;
Step 303: subtract the task's load value from the total load value of the corresponding thread;
Step 304: compute the load variance of all threads in the pool with the variance formula;
Step 305: judge whether the variance computed in step 304 exceeds the preset LoadMax (default 100);
Step 306: if LoadMax is exceeded, perform load balancing;
Step 307: delete the task from the mapping table; if the deletion fails, restore the task's load value to the load value of the corresponding thread.
6. The thread pool design method that guarantees temporal succession according to claim 4, characterized in that step 3 comprises the following sub-steps:
Step 301: compute the hash value of the identifier of the task to be deleted;
Step 302: find the thread ID corresponding to this task in the mapping table by the hash value, and record the task's load value;
Step 303: subtract the task's load value from the total load value of the corresponding thread;
Step 304: compute the load variance of all threads in the pool with the variance formula;
Step 305: judge whether the variance computed in step 304 exceeds the preset LoadMax (default 100);
Step 306: if LoadMax is exceeded, perform load balancing;
Step 307: delete the task from the mapping table; if the deletion fails, restore the task's load value to the load value of the corresponding thread.
7. The thread pool design method that guarantees temporal succession according to claim 5, characterized in that step 5 comprises the following sub-steps:
Step 401: the monitor thread normally sleeps and is activated periodically (default interval 60 minutes);
Step 402: compute the load variance of all threads in the pool with the variance formula;
Step 403: judge whether the variance computed in step 402 exceeds the preset LoadMax (default 100); if not, return to sleep;
Steps 404 and 405: if LoadMax is exceeded, traverse the load values of all threads and find a thread with the largest load value and a thread with the smallest load value;
Step 406: move one task from the most heavily loaded thread to the least loaded thread;
Step 407: update the mapping table from task identifiers to thread IDs, changing the mapped thread ID of the moved task to the ID of the less loaded thread.
8. The thread pool design method that guarantees temporal succession according to claim 6, characterized in that step 5 comprises the following sub-steps:
Step 401: the monitor thread normally sleeps and is activated periodically (default interval 60 minutes);
Step 402: compute the load variance of all threads in the pool with the variance formula;
Step 403: judge whether the variance computed in step 402 exceeds the preset LoadMax (default 100); if not, return to sleep;
Steps 404 and 405: if LoadMax is exceeded, traverse the load values of all threads and find a thread with the largest load value and a thread with the smallest load value;
Step 406: move one task from the most heavily loaded thread to the least loaded thread;
Step 407: update the mapping table from task identifiers to thread IDs, changing the mapped thread ID of the moved task to the ID of the less loaded thread.
CN 201110060502 2011-03-14 2011-03-14 Method for designing thread pool capable of ensuring temporal succession Expired - Fee Related CN102122252B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110060502 CN102122252B (en) 2011-03-14 2011-03-14 Method for designing thread pool capable of ensuring temporal succession

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110060502 CN102122252B (en) 2011-03-14 2011-03-14 Method for designing thread pool capable of ensuring temporal succession

Publications (2)

Publication Number Publication Date
CN102122252A true CN102122252A (en) 2011-07-13
CN102122252B CN102122252B (en) 2013-06-19

Family

ID=44250814

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110060502 Expired - Fee Related CN102122252B (en) 2011-03-14 2011-03-14 Method for designing thread pool capable of ensuring temporal succession

Country Status (1)

Country Link
CN (1) CN102122252B (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102629220A (en) * 2012-03-08 2012-08-08 北京神州数码思特奇信息技术股份有限公司 Dynamic task allocation and management method
CN103455377A (en) * 2013-08-06 2013-12-18 北京京东尚科信息技术有限公司 System and method for managing business thread pool
CN103473138A (en) * 2013-09-18 2013-12-25 柳州市博源环科科技有限公司 Multi-tasking queue scheduling method based on thread pool
CN104166589A (en) * 2013-05-17 2014-11-26 阿里巴巴集团控股有限公司 Heartbeat package processing method and device
CN104536827A (en) * 2015-01-27 2015-04-22 浪潮(北京)电子信息产业有限公司 Data dispatching method and device
CN105426252A (en) * 2015-12-17 2016-03-23 浪潮(北京)电子信息产业有限公司 Thread distribution method and system of distributed type file system
CN105468451A (en) * 2014-08-19 2016-04-06 复旦大学 Job scheduling system of computer cluster on the basis of high-throughput sequencing data
CN106034144A (en) * 2015-03-12 2016-10-19 中国人民解放军国防科学技术大学 Load-balancing-based virtual asset data storage method
CN106095590A (en) * 2016-07-21 2016-11-09 联动优势科技有限公司 A kind of method for allocating tasks based on thread pool and device
CN106406845A (en) * 2015-08-03 2017-02-15 阿里巴巴集团控股有限公司 A task processing method and device
CN109815014A (en) * 2019-01-17 2019-05-28 北京三快在线科技有限公司 Data processing method, device, electronic equipment and computer readable storage medium
CN109933415A (en) * 2017-12-19 2019-06-25 中国移动通信集团河北有限公司 Processing method, device, equipment and the medium of data
CN110019339A (en) * 2017-11-20 2019-07-16 北京京东尚科信息技术有限公司 A kind of data query method and system
CN110162392A (en) * 2019-05-29 2019-08-23 北京达佳互联信息技术有限公司 Execution method, apparatus, electronic equipment and the storage medium of periodic task
CN111651866A (en) * 2020-05-12 2020-09-11 北京华如科技股份有限公司 Simulation execution method and system based on dynamic load migration and time synchronization
CN112905347A (en) * 2021-03-04 2021-06-04 北京澎思科技有限公司 Data processing method, device and storage medium
WO2021208786A1 (en) * 2020-04-13 2021-10-21 华为技术有限公司 Thread management method and apparatus

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107491927A (en) * 2016-06-13 2017-12-19 阿里巴巴集团控股有限公司 The distribution method and device of a kind of working time

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102629220A (en) * 2012-03-08 2012-08-08 北京神州数码思特奇信息技术股份有限公司 Dynamic task allocation and management method
CN104166589A (en) * 2013-05-17 2014-11-26 阿里巴巴集团控股有限公司 Heartbeat package processing method and device
CN103455377B (en) * 2013-08-06 2019-01-22 北京京东尚科信息技术有限公司 System and method for management business thread pool
CN103455377A (en) * 2013-08-06 2013-12-18 北京京东尚科信息技术有限公司 System and method for managing business thread pool
CN103473138A (en) * 2013-09-18 2013-12-25 柳州市博源环科科技有限公司 Multi-tasking queue scheduling method based on thread pool
CN105468451A (en) * 2014-08-19 2016-04-06 复旦大学 Job scheduling system of computer cluster on the basis of high-throughput sequencing data
CN104536827A (en) * 2015-01-27 2015-04-22 浪潮(北京)电子信息产业有限公司 Data dispatching method and device
CN106034144A (en) * 2015-03-12 2016-10-19 中国人民解放军国防科学技术大学 Load-balancing-based virtual asset data storage method
CN106034144B (en) * 2015-03-12 2019-10-15 中国人民解放军国防科学技术大学 A kind of fictitious assets date storage method based on load balancing
CN106406845A (en) * 2015-08-03 2017-02-15 阿里巴巴集团控股有限公司 A task processing method and device
CN105426252A (en) * 2015-12-17 2016-03-23 浪潮(北京)电子信息产业有限公司 Thread distribution method and system of distributed type file system
CN106095590B (en) * 2016-07-21 2019-05-03 联动优势科技有限公司 A kind of method for allocating tasks and device based on thread pool
CN106095590A (en) * 2016-07-21 2016-11-09 联动优势科技有限公司 A kind of method for allocating tasks based on thread pool and device
CN110019339A (en) * 2017-11-20 2019-07-16 北京京东尚科信息技术有限公司 A kind of data query method and system
CN109933415A (en) * 2017-12-19 2019-06-25 中国移动通信集团河北有限公司 Processing method, device, equipment and the medium of data
CN109815014A (en) * 2019-01-17 2019-05-28 北京三快在线科技有限公司 Data processing method, device, electronic equipment and computer readable storage medium
CN110162392A (en) * 2019-05-29 2019-08-23 北京达佳互联信息技术有限公司 Execution method, apparatus, electronic equipment and the storage medium of periodic task
WO2021208786A1 (en) * 2020-04-13 2021-10-21 华为技术有限公司 Thread management method and apparatus
CN111651866A (en) * 2020-05-12 2020-09-11 北京华如科技股份有限公司 Simulation execution method and system based on dynamic load migration and time synchronization
CN111651866B (en) * 2020-05-12 2023-03-17 北京华如科技股份有限公司 Simulation execution method and system based on dynamic load migration and time synchronization
CN112905347A (en) * 2021-03-04 2021-06-04 北京澎思科技有限公司 Data processing method, device and storage medium

Also Published As

Publication number Publication date
CN102122252B (en) 2013-06-19

Similar Documents

Publication Publication Date Title
CN102122252B (en) Method for designing thread pool capable of ensuring temporal succession
Garraghan et al. An analysis of the server characteristics and resource utilization in google cloud
CN103365710B (en) Real-time task scheduling device and method and computer system
CN105095327A (en) Distributed ELT system and scheduling method
US9424212B2 (en) Operating system-managed interrupt steering in multiprocessor systems
CN103955398B (en) Virtual machine coexisting scheduling method based on processor performance monitoring
US10642652B2 (en) Best trade-off point on an elbow curve for optimal resource provisioning and performance efficiency
CN103745225A (en) Method and system for training distributed CTR (Click To Rate) prediction model
Guerrini The Ramsey model with AK technology and a bounded population growth rate
CN115239173A (en) Scheduling plan generation method and device, electronic equipment and storage medium
CN102768637A (en) Method and device for controlling test execution
CN103677990A (en) Virtual machine real-time task scheduling method and device and virtual machine
CN105824687B (en) A kind of method and device of Java Virtual Machine performance automated tuning
CN113032093B (en) Distributed computing method, device and platform
CN107370783A (en) A kind of dispatching method and device of cloud computing cluster resource
CN103325012A (en) Parallel computing dynamic task distribution method applicable to grid security correction
Wang et al. In stechah: An autoscaling scheme for hadoop in the private cloud
CN103530742B (en) Improve the method and device of scheduling arithmetic speed
CN115525797A (en) Database data query method, device, equipment and storage medium
Li et al. A strategy game system for QoS-efficient dynamic virtual machine consolidation in data centers
CN115309507A (en) Method, device, equipment and medium for calculating CPU resource occupancy rate
CN103440533B (en) The confining method of the non-bottleneck ability of job shop under a kind of cloud manufacturing mode
Wang et al. Slo-driven task scheduling in mapreduce environments
Hasan et al. GPaaScaler: Green energy aware platform scaler for interactive cloud application
CN102081778A (en) General method for calculating sales rewards

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130619

Termination date: 20140314