CN100562854C - Method for implementing load balancing in a multi-core processor operating system - Google Patents
- Publication number: CN100562854C (application CNB2008100611349A)
- Authority: CN (China)
- Prior art keywords: load, processor core, processor, thread, load balancing
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Classification: Multi Processors (AREA)
Abstract
The invention discloses a method for implementing load balancing in a multi-core processor operating system. While the operating system runs, the load on each processor core is monitored, and threads are distributed according to the detected load. The method balances the load of the multi-core processor operating system, so that the threads of a multithreaded program running on it are distributed evenly over the different processor cores under operating-system scheduling, thereby improving the execution efficiency of the multi-core processor.
Description
Technical field
The present invention relates to multi-core processor operating system technology, and in particular to a method for implementing load balancing in a multi-core processor operating system.
Background art
Moore's Law has held for decades, but as transistor sizes have continued to shrink in recent years, it has become difficult to pack more transistors into a given die area; the complexity of integrated circuits can no longer be increased significantly, which indicates that single-processor performance can no longer improve substantially. On the other hand, processor frequency has hit a bottleneck (the Pentium 4 topped out at 3.8 GHz, short of the expected 4 GHz), and even where the clock rate can still be raised, the resulting power consumption cannot be managed. To continue improving performance, hardware vendors led by Intel, AMD and IBM therefore turned to multi-core processors, also called chip multiprocessors (Chip Multi-Processors, CMP).
A new architecture can only deliver its performance when matched with suitable software. The basic idea behind performance gains on a multi-core architecture is to decompose a task appropriately so that its parts run simultaneously in parallel on multiple processors; parallel computation is therefore the defining characteristic of the multi-core architecture. At present, the division of tasks and the implementation of multithreading are the main bottlenecks, because the multi-core architecture requires multithreading not only at the software level but also at the hardware level.
As the software closest to the hardware, how the operating system can best exploit the performance of multiple cores is a current research focus. The coordination among processors and the degree of concurrency achieved depend to a large extent on how the operating system scheduler schedules and distributes tasks.
As is well known, operating-system task scheduling covers real-time tasks, interactive tasks and background batch tasks. Scheduling algorithms may be based on priorities, round-robin, task preemption, and so on. The main problems scheduling must solve are how to utilize resources as fully as possible, maximize system throughput, and spend as little time as possible on scheduling itself.
Compared with single-core processors, multi-core processors raise new scheduling problems: load balancing and task distribution. Load balancing means keeping the resources occupied on all processors as even as possible, so as to maximize system throughput; task distribution means assigning tasks reasonably to the processor cores, so that the workload is balanced among them. Given the many different multi-core architectures, finding a suitable multi-core scheduling approach is particularly important.
The biggest difference between a multi-core system and a single-core system is the concurrency of the multi-core system. Concurrency requires that all processors in the system reach the highest possible throughput and resource utilization. We therefore want every processor core to execute tasks evenly, with identical load.
Load balancing is a new problem posed by multi-core operating systems. A single-core operating system has only one core and need not consider it. For a multi-core system to reach the best possible performance, tasks must be distributed evenly over the processor cores, where "evenly" means not merely an even number of tasks, but also even access to system resources and even execution time. The goal of reasonable task distribution is load balance. From the narrow viewpoint of task execution, the running time of a task, the moments at which it accesses resources, and its requests for system interrupts are all unpredictable; task execution is dynamic. Load balance therefore cannot be achieved by task distribution alone: when the load among processors in the system becomes unbalanced, dynamic balancing, such as migrating tasks at run time, is needed to restore balance among the processors.
By reasonably solving these new load-balancing problems, processor resources can be used more effectively and reasonably, and task execution can obtain the fastest response.
Summary of the invention
The object of the present invention is to provide a method for implementing load balancing in a multi-core processor operating system.
The technical solution adopted by the present invention to solve its technical problem is as follows:
1) Dispatch domain construction:
During processor core initialization, each processor core is visited; processor cores sharing the L2 cache are placed in the same dispatch domain. In this way, several different dispatch domains are formed.
2) Load vector calculation:
Resource utilization and run-queue length are used as the factors of the load vector. The utilization FCPU of a processor core is computed with formula (1), where Tused is the processor busy time and Tidle is the processor idle time,
FCPU=Tused/(Tidle+Tused) (1)
The load vector Fload is computed with formula (2), where FCPU is the core utilization obtained from formula (1) and Frun_queue is the length of the core's run queue;
Fload=(FCPU+1)*Frun_queue (2)
3) Load balance detection:
For a dispatch domain Pset={P1, P2, ..., Pn} of processor cores, where P1, P2, ..., Pn are the cores in Pset, any core Pi in Pset can check whether its load is unbalanced with respect to the other cores. Each processor core runs its own load check, which occurs at thread allocation time, when the processor is idle, and at fixed time intervals.
The load balance check proceeds as follows:
In the first step, Pi checks the cores Pj, Pj+1, ..., Pj+k in the same dispatch domain; if the load is unbalanced, the check returns the core P whose load vector differs most from Pi's, together with the sign W of the difference between Pi's load vector and P's, and the check ends.
In the second step, if the cores in the same dispatch domain are balanced, the sibling dispatch domains at the same level are checked; for a sibling domain it suffices to check any one core Pm in it. When the load is unbalanced, the first unbalanced core P is returned together with the sign W of the difference between Pi's load and P's.
4) Thread allocation:
When a new thread Tnew is created, allocation proceeds as follows:
Once Tnew becomes runnable, the load balance check of the core Pparent running the parent thread Tparent is invoked. If the load is balanced, the thread is inserted into the run queue of the core running the parent; otherwise it is inserted into the run queue of the least-loaded core Pload_least.
5) Dynamic load balancing at run time:
For the set Pset of processor cores, every core Pi in Pset runs its own independent load balance check, using the same check policy as in step 3).
If Pi has threads running, it invokes load detection at a fixed time interval; if Pi is idle, the interval is shortened so that detection runs as often as possible. If all cores are idle, the interval for checking load balance is adjusted.
When core Pi finds an imbalance, the check returns the unbalanced core Pt and the comparison value W describing the load relationship between Pi and Pt. If W>0, the load of Pi is greater than that of Pt, and some threads in Pi's ready queue must be migrated to Pt's ready queue; if W<0, some threads must be migrated from Pt's ready queue to Pi's to restore balance; if W=0, the load is balanced and no migration is needed.
Compared with the background art, the present invention has the following beneficial effects:
The present invention is a load-balancing method for multi-core processor operating systems. Its main function is to construct dispatch domains and balance the load within and between them, so that the threads of a multithreaded program running on the multi-core processor operating system are distributed evenly over the different processor cores under operating-system scheduling, thereby improving the execution efficiency of the multi-core processor.
(1) Efficiency. Having the operating system balance the load distributes multiple threads evenly over multiple processor cores for execution, improving execution efficiency.
(2) Practicality. Load balancing increases the parallelism of thread execution and reduces thread migration; repeated tests have shown good practicality.
Description of drawings
Fig. 1 is a schematic diagram of the implementation process of the present invention;
Fig. 2 is a schematic diagram of dispatch-domain construction on a four-core, two-way multiprocessor;
Fig. 3 is a schematic diagram of thread allocation on a four-core, two-way multiprocessor with balanced load;
Fig. 4 is a schematic diagram of thread allocation on a four-core, two-way multiprocessor with unbalanced load inside a dispatch domain;
Fig. 5 is a schematic diagram of thread allocation on a four-core, two-way multiprocessor with unbalanced load between dispatch domains.
Embodiment
The present invention is a method for implementing load balancing in a multi-core processor operating system; its specific implementation process is described below with reference to Fig. 1.
1) Dispatch domain construction:
A thread, as commonly understood, is a lightweight process that shares resources; in modern operating-system scheduling the thread is the basic unit of task scheduling, and it is the basic scheduling unit in the present invention. Load refers to the threads running on the different processor cores. The multi-core processor considered here has three typical characteristics: multiple processor cores on one chip, an L2 cache shared among the processor cores, and direct communication between cores through registers. On such a processor, the L1 cache is private to each core.
A dispatch domain is a set of processor cores whose loads need to be balanced. For cores sharing the L2 cache, the L2 miss cost incurred when a thread migrates between them is the same as if no migration had taken place. Dispatch domains are therefore constructed by placing the cores that share an L2 cache into the same domain. Dispatch domains form a hierarchy: the top level (the n-th level, if there are n levels of dispatch domains) contains all processor cores, while a bottom-level domain (level 0, a base dispatch domain) contains the cores whose loads are most closely related. Two processors in the same dispatch domain must be load-balanced against each other, and processors in dispatch domains related as parent/child, ancestor or sibling can also be balanced against each other. Fig. 2 illustrates dispatch-domain construction using a four-core, two-way multiprocessor as an example: processor cores 0 and 1 form one base dispatch domain, processor cores 2 and 3 form another, and the two base domains together form the dispatch domain of the next level up.
Each processor core is assigned a logical ID at startup; these IDs increase from 0. During core initialization each processor core is visited, and the cores sharing an L2 cache are placed in the same dispatch domain. In this way, several different dispatch domains are formed.
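The grouping described above can be sketched as follows. This is an illustrative model, not the patent's implementation: the function name `build_dispatch_domains` and the core-to-L2 mapping input are assumptions.

```python
def build_dispatch_domains(l2_of):
    """Group processor cores that share an L2 cache into base dispatch
    domains, and collect all cores into one top-level domain.

    l2_of: dict mapping logical core ID -> L2 cache ID.
    Returns (base_domains, top_domain).
    """
    base = {}
    for core in sorted(l2_of):          # visit each core in logical-ID order
        base.setdefault(l2_of[core], []).append(core)
    base_domains = [sorted(cores) for cores in base.values()]
    top_domain = sorted(l2_of)          # the top-level domain holds all cores
    return base_domains, top_domain

# Four-core, two-way example as in Fig. 2: cores 0 and 1 share one L2 cache,
# cores 2 and 3 share the other.
domains, top = build_dispatch_domains({0: 0, 1: 0, 2: 1, 3: 1})
# domains == [[0, 1], [2, 3]], top == [0, 1, 2, 3]
```

A deeper hierarchy would repeat the grouping at each cache level; two levels suffice for the four-core, two-way example in the figures.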
2) Load vector calculation:
The load vector is the yardstick by which load is compared. To assess load balance effectively, a load vector is needed; it is defined as the basic unit for determining a processor core's load.
The present invention uses resource utilization and run-queue length as the factors of the load vector.
Formula (1) gives the utilization FCPU of a processor core, where Tused is the processor busy time and Tidle is the processor idle time.
FCPU=Tused/(Tidle+Tused) (1)
Formula (2) gives the way the load vector is computed in the present invention, where FCPU is the core utilization computed by formula (1) and Frun_queue is the length of the core's run queue.
Fload=(FCPU+1)*Frun_queue (2)
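Formulas (1) and (2) can be written out directly; the function names here are illustrative assumptions, while the formulas themselves are the patent's.

```python
def fcpu(t_used, t_idle):
    """Formula (1): core utilization FCPU = Tused / (Tidle + Tused)."""
    return t_used / (t_idle + t_used)

def fload(t_used, t_idle, run_queue_len):
    """Formula (2): load vector Fload = (FCPU + 1) * Frun_queue."""
    return (fcpu(t_used, t_idle) + 1) * run_queue_len

# A core busy 75 ms out of 100 ms with 4 runnable threads:
# FCPU = 75 / (25 + 75) = 0.75, Fload = (0.75 + 1) * 4 = 7.0
```

Note that an idle core (empty run queue) gets Fload = 0 regardless of past utilization, and the +1 term keeps the run-queue length dominant when utilization is low.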
3) Load balance detection
Load detection means that the operating system checks whether the load among processor cores is unbalanced; the load balance check is the key part of realizing load balancing in multi-core operating-system scheduling. For a dispatch domain Pset={P1, P2, ..., Pn} of processor cores, where P1, P2, ..., Pn are the cores in Pset, any core Pi in Pset can check whether its load is unbalanced with respect to the other cores.
Each processor core runs its own load check. Checks occur at thread allocation time, when the processor is idle, and at fixed time intervals.
The load balance check proceeds as follows:
(1) Pi checks the cores Pj, Pj+1, ..., Pj+k in the same dispatch domain. If the load is unbalanced, the check returns the core P whose load vector differs most from Pi's, together with the sign W of the difference between Pi's load vector and P's. The check then ends.
(2) If the cores in the same dispatch domain are balanced, the sibling dispatch domains at the same level are checked. For a sibling domain it suffices to check any one core Pm in it. When the load is unbalanced, the first unbalanced core P is returned together with the sign W of the difference between Pi's load and P's.
To compare loads, the load vector described above is used together with a prescribed threshold M. For cores in the same dispatch domain, the load is considered balanced if the load-vector difference is below a·M (a<1) and unbalanced otherwise; for processors in different dispatch domains, the full threshold M is used for the comparison. The choice of M and of the factor a is related to the L2 cache miss cost, the time to transfer tasks between run queues, the scheduler overhead, and so on, and can be set according to the application environment.
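The two-step check can be sketched as below. This is a hedged model: the sign convention W = sign(Fload(Pi) − Fload(P)) and all names are assumptions consistent with the description, not verbatim from the patent.

```python
def check_balance(pi, loads, same_domain, sibling_reps, M, a):
    """Return (unbalanced_core, W) or (None, 0) if the load is balanced.

    loads: dict core -> Fload; same_domain: Pi's peers in its own domain;
    sibling_reps: one representative core Pm per sibling dispatch domain.
    """
    # Step 1: inside Pi's own dispatch domain, threshold a*M (a < 1);
    # return the peer whose load vector differs most from Pi's.
    worst, worst_diff = None, 0.0
    for pj in same_domain:
        diff = loads[pi] - loads[pj]
        if abs(diff) >= a * M and abs(diff) > abs(worst_diff):
            worst, worst_diff = pj, diff
    if worst is not None:
        return worst, (1 if worst_diff > 0 else -1)
    # Step 2: sibling domains at the same level, full threshold M;
    # checking one representative core per sibling domain suffices.
    for pm in sibling_reps:
        diff = loads[pi] - loads[pm]
        if abs(diff) >= M:
            return pm, (1 if diff > 0 else -1)
    return None, 0

loads = {0: 7.0, 1: 2.0, 2: 6.5, 3: 6.0}
# Core 0 vs core 1 in the same domain, M = 4, a = 0.5: |7.0 - 2.0| >= 2.0,
# so the check reports core 1 with W = +1 (core 0 is the heavier one).
```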
4) Thread allocation:
Thread allocation means distributing threads evenly over the processor cores. When a new thread is created and the load-balance condition of the processor cores holds, the thread is preferentially kept running on the core executing its parent thread. For this purpose the thread's task descriptor keeps a cpu_mask identifying the set of processor cores on which the thread may run, limiting the processors on which the thread can execute; once this value is set, the thread can execute only on the cores in the set given by cpu_mask, which yields a static form of load balance.
When a new thread Tnew is created, allocation proceeds as follows: once the state of Tnew is runnable, the load balance check of the core Pparent running the parent thread Tparent is invoked. If the load is balanced, the thread is inserted into the run queue of the core running the parent; otherwise it is inserted into the run queue of the least-loaded core Pload_least.
The thread allocation strategy is illustrated below with a four-core, two-way multi-core processor. Processor cores 0 and 1 are in one dispatch domain, and processor cores 2 and 3 are in another; the parent of the new thread is ready on processor core 2. In Fig. 3 the load is balanced and Tnew is assigned to processor core 1; in Fig. 4 the load inside a dispatch domain is unbalanced and Tnew is assigned to processor core 0; in Fig. 5 the load inside each dispatch domain is balanced but the load between dispatch domains is not, and Tnew is assigned to processor core 2.
In a modern operating system, threads are created very rapidly. Checking load balance on every thread creation would cost more than it gains, so each processor core checks load balance only at fixed time intervals; within such an interval, new threads are all assigned to the same processor core. This effectively solves the cost problem of load-balance detection.
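The allocation rule for a new runnable thread can be sketched as follows; the names (`assign_new_thread`, `run_queues`, the `is_balanced` callback) are illustrative assumptions standing in for the per-core load balance check of step 3).

```python
def assign_new_thread(tnew, parent_core, run_queues, is_balanced):
    """Insert Tnew into the parent's run queue if the parent's core reports
    a balanced load, else into the run queue of the least-loaded core."""
    if is_balanced(parent_core):
        target = parent_core                        # stay near the parent
    else:
        # Pload_least: here modeled as the core with the shortest run queue
        target = min(run_queues, key=lambda c: len(run_queues[c]))
    run_queues[target].append(tnew)
    return target

queues = {0: ["t1", "t2"], 1: [], 2: ["t3"], 3: ["t4"]}
# Parent thread on core 2, load reported unbalanced -> Tnew goes to the
# least-loaded core, core 1 (empty run queue).
core = assign_new_thread("Tnew", 2, queues, lambda c: False)
```

Keeping the child on the parent's core when the load is balanced preserves whatever cache state the parent has warmed up, which is why the balanced case does not consult the run-queue lengths at all.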
5) Dynamic load balancing at run time:
A running thread may enter a waiting queue for many reasons: shortage of resources, user interrupts, run-time exceptions, the need to communicate, and so on; a page fault likewise causes a thread to wait. The conditions under which a thread cannot continue to run are unpredictable, so the remaining run time of a thread is dynamically variable. Thread allocation alone is therefore not enough to keep the load among the processors balanced; dynamic load balancing is also needed while threads run, and it is realized mainly by migrating threads between processor cores.
Because the cores of the multi-core processor share the L2 cache, migrating a thread between cores of the same dispatch domain costs much less than the migration miss cost between different dispatch domains.
For the set Pset of processor cores, every core Pi in Pset runs its own independent load balance check, using the same check policy as in step 3). If Pi has threads running, it invokes load detection at a fixed time interval; if Pi is idle, the interval is shortened so that detection runs as often as possible; if all cores are idle, the interval for checking load balance is adjusted.
When core Pi finds an imbalance, the load balance check returns the unbalanced core Pt and the comparison value W describing the load relationship between Pi and Pt. If W>0, the load of Pi is greater than that of Pt, and some threads in Pi's ready queue must be migrated to Pt's ready queue; if W<0, some threads must be migrated from Pt's ready queue to Pi's to restore balance; if W=0, the load is balanced and no thread queue migration is needed.
For various reasons, many threads cannot be moved at migration time, so a single pass may not reach load balance. Dynamic load balancing therefore continues against the remaining unbalanced processors until the load is balanced.
Each processor core Pi detects load balance on its own, and its balancing target is the load between Pi and the core whose load differs most from its own. Because every core balances against the core differing most from it, the per-core dynamic balancing reaches a globally balanced load.
When Pi migrates threads from processor Pt, a selected thread Tselected is not migrated if it meets any of the following conditions:
(1) Tselected is currently executing on the target processor core;
(2) Tselected is cache-hot, i.e. it has run within the most recent time period;
(3) the set of cores given by Tselected's cpu_mask does not include processor Pi, in which case Tselected cannot be migrated to Pi.
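The migration decision, combining the sign of W with the three exclusion conditions above, can be sketched as below. This is a hedged model: the sign convention for W, the thread representation, and the helper names are assumptions; only the exclusion conditions follow the text directly.

```python
def can_migrate(thread, target_core):
    """A thread is skipped if it is running on the target core, is
    cache-hot (ran recently), or excludes the target in its cpu_mask."""
    return (not thread["running_on_target"]
            and not thread["cache_hot"]
            and target_core in thread["cpu_mask"])

def migrate(pi, pt, W, ready, n=1):
    """W > 0: Pi is more loaded, move up to n threads Pi -> Pt;
    W < 0: move Pt -> Pi; W == 0: balanced, nothing to do."""
    if W == 0:
        return
    src, dst = (pi, pt) if W > 0 else (pt, pi)
    moved = 0
    for t in list(ready[src]):          # iterate over a copy while mutating
        if moved == n:
            break
        if can_migrate(t, dst):
            ready[src].remove(t)
            ready[dst].append(t)
            moved += 1

t_ok = {"running_on_target": False, "cache_hot": False, "cpu_mask": {0, 1}}
t_hot = {"running_on_target": False, "cache_hot": True, "cpu_mask": {0, 1}}
ready = {0: [t_hot, t_ok], 1: []}
migrate(0, 1, W=1, ready=ready)   # core 0 heavier: move one eligible thread
# t_hot stays on core 0 (cache-hot); t_ok moves to core 1
```

Because ineligible threads are simply skipped, a pass can move fewer threads than intended, which is exactly why the text says balancing must be repeated until the load evens out.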
Claims (1)
1. A method for implementing load balancing in a multi-core processor operating system, characterized by:
1) dispatch domain construction:
during processor core initialization, each processor core is visited; processor cores sharing the L2 cache are placed in the same dispatch domain; in this way, several different dispatch domains are formed;
2) load vector calculation:
resource utilization and run-queue length are used as the factors of the load vector; the utilization FCPU of a processor core is computed with formula (1), where Tused is the processor busy time and Tidle is the processor idle time,
FCPU=Tused/(Tidle+Tused) (1)
the load vector Fload is computed with formula (2), where FCPU is the core utilization obtained from formula (1) and Frun_queue is the length of the core's run queue;
Fload=(FCPU+1)*Frun_queue (2)
3) load balance detection:
for a dispatch domain Pset={P1, P2, ..., Pn} of processor cores, where P1, P2, ..., Pn are the cores in Pset, any core Pi in Pset can check whether its load is unbalanced with respect to the other cores; each processor core runs its own load check, which occurs at thread allocation time, when the processor is idle, and at fixed time intervals;
the load balance check proceeds as follows:
in the first step, Pi checks the cores Pj, Pj+1, ..., Pj+k in the same dispatch domain; if the load is unbalanced, the check returns the core P whose load vector differs most from Pi's, together with the sign W of the difference between Pi's load vector and P's, and the check ends;
in the second step, if the cores in the same dispatch domain are balanced, the sibling dispatch domains at the same level are checked; for a sibling domain it suffices to check any one core Pm in it; when the load is unbalanced, the first unbalanced core P is returned together with the sign W of the difference between Pi's load and P's;
4) thread allocation:
when a new thread Tnew is created, allocation proceeds as follows:
once Tnew becomes runnable, the load balance check of the core Pparent running the parent thread Tparent is invoked; if the load is balanced, the thread is inserted into the run queue of the core running the parent; otherwise it is inserted into the run queue of the least-loaded core Pload_least;
5) dynamic load balancing at run time:
for the set Pset of processor cores, every core Pi in Pset runs its own independent load balance check, using the same check policy as in step 3);
if Pi has threads running, it invokes load detection at a fixed time interval; if Pi is idle, the interval is shortened so that detection runs as often as possible; if all cores are idle, the interval for checking load balance is adjusted;
when core Pi finds an imbalance, the check returns the unbalanced core Pt and the comparison value W describing the load relationship between Pi and Pt; if W>0, the load of Pi is greater than that of Pt, and some threads in Pi's ready queue are migrated to Pt's ready queue; if W<0, some threads are migrated from Pt's ready queue to Pi's to restore balance; if W=0, the load is balanced and no thread queue migration is needed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2008100611349A CN100562854C (en) | 2008-03-11 | 2008-03-11 | The implementation method of load equalization of multicore processor operating system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101256515A CN101256515A (en) | 2008-09-03 |
CN100562854C true CN100562854C (en) | 2009-11-25 |
Family
ID=39891358
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011150792A1 (en) * | 2010-11-29 | 2011-12-08 | 华为技术有限公司 | Power saving realization method and device for cpu |
CN106293935A (en) * | 2016-07-28 | 2017-01-04 | 张升泽 | Electric current is in the how interval distribution method within multi core chip and system |
Families Citing this family (59)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101394362B (en) * | 2008-11-12 | 2010-12-22 | 清华大学 | Method for load balance to multi-core network processor based on flow fragmentation |
CN101739286B (en) * | 2008-11-19 | 2012-12-12 | 英业达股份有限公司 | Method for balancing load of storage server with a plurality of processors |
CN101782862B (en) * | 2009-01-16 | 2013-03-13 | 鸿富锦精密工业(深圳)有限公司 | Processor distribution control system and control method thereof |
CN101504618B (en) * | 2009-02-26 | 2011-04-27 | 浙江大学 | Multi-core processor oriented real-time thread migration method |
US8788570B2 (en) * | 2009-06-22 | 2014-07-22 | Citrix Systems, Inc. | Systems and methods for retaining source IP in a load balancing multi-core environment |
US20110022870A1 (en) * | 2009-07-21 | 2011-01-27 | Microsoft Corporation | Component power monitoring and workload optimization |
JP5541355B2 (en) * | 2010-03-18 | 2014-07-09 | 富士通株式会社 | Multi-core processor system, arbitration circuit control method, control method, and arbitration circuit control program |
WO2011117987A1 (en) * | 2010-03-24 | 2011-09-29 | 富士通株式会社 | Multi-core system and start-up method |
CN102455944A (en) * | 2010-10-29 | 2012-05-16 | 迈普通信技术股份有限公司 | Multi-core load balancing method and processor thereof |
CN102156659A (en) * | 2011-03-28 | 2011-08-17 | 中国人民解放军国防科学技术大学 | Scheduling method and system for job task of file |
CN102521047B (en) * | 2011-11-15 | 2014-07-09 | 重庆邮电大学 | Method for realizing interrupted load balance among multi-core processors |
CN103197977B (en) * | 2011-11-16 | 2016-09-28 | 华为技术有限公司 | A kind of thread scheduling method, thread scheduling device and multi-core processor system |
TWI439925B (en) * | 2011-12-01 | 2014-06-01 | Inst Information Industry | Embedded systems and methods for threads and buffer management thereof |
CN102546946B (en) * | 2012-01-05 | 2014-04-23 | 中国联合网络通信集团有限公司 | Method and device for processing task on mobile terminal |
CN103297767B (en) * | 2012-02-28 | 2016-03-16 | 三星电子(中国)研发中心 | A kind of jpeg image coding/decoding method and decoder being applicable to multinuclear embedded platform |
CN102609307A (en) * | 2012-03-07 | 2012-07-25 | 汉柏科技有限公司 | Multi-core multi-thread dual-operating system network equipment and control method thereof |
CN102629217B (en) * | 2012-03-07 | 2015-04-22 | 汉柏科技有限公司 | Network equipment with multi-process multi-operation system and control method thereof |
CN102681889B (en) * | 2012-04-27 | 2015-01-07 | 电子科技大学 | Scheduling method of cloud computing open platform |
CN104239149B (en) * | 2012-08-31 | 2017-03-29 | 南京工业职业技术学院 | A kind of service end multi-threaded parallel data processing method and load-balancing method |
CN102866922B (en) * | 2012-08-31 | 2014-10-22 | 河海大学 | Load balancing method used in massive data multithread parallel processing |
CN102929718B (en) * | 2012-09-17 | 2015-03-11 | 厦门坤诺物联科技有限公司 | Distributed GPU (graphics processing unit) computer system based on task scheduling |
CN102929772A (en) * | 2012-10-16 | 2013-02-13 | 苏州迈科网络安全技术股份有限公司 | Monitoring method and system of intelligent real-time system |
CN103793270B (en) * | 2012-10-26 | 2018-09-07 | 百度在线网络技术(北京)有限公司 | Moving method, device and the terminal of end application |
CN104219161B (en) * | 2013-06-04 | 2017-09-05 | 华为技术有限公司 | A kind of method and device of balance nodes load |
CN103530191B (en) * | 2013-10-18 | 2017-09-12 | 杭州华为数字技术有限公司 | Focus identifying processing method and device |
CN103617086B (en) * | 2013-11-20 | 2017-02-08 | 东软集团股份有限公司 | Parallel computation method and system |
CN105009083A (en) * | 2013-12-19 | 2015-10-28 | 华为技术有限公司 | Method and device for scheduling application process |
JP5808450B1 (en) * | 2014-04-04 | 2015-11-10 | ファナック株式会社 | Control device for executing sequential program using multi-core processor |
CN103927225B (en) * | 2014-04-22 | 2018-04-10 | 浪潮电子信息产业股份有限公司 | Internet information processing optimization method for multi-core architectures |
CN104239153B (en) * | 2014-09-29 | 2018-09-11 | 三星电子(中国)研发中心 | The method and apparatus of multi-core CPU load balancing |
CN105700951B (en) * | 2014-11-25 | 2021-01-26 | 中兴通讯股份有限公司 | Method and device for realizing CPU service migration |
CN104506452B (en) * | 2014-12-16 | 2017-12-26 | 福建星网锐捷网络有限公司 | Message processing method and device |
CN106033374A (en) * | 2015-03-13 | 2016-10-19 | 西安酷派软件科技有限公司 | Method, device, and terminal for allocating a multi-core central processing unit among multiple systems |
CN104978235A (en) * | 2015-06-30 | 2015-10-14 | 柏斯红 | Operating frequency prediction based load balancing method |
CN106371914A (en) * | 2015-07-23 | 2017-02-01 | 中国科学院声学研究所 | Load intensity-based multi-core task scheduling method and system |
US20170039093A1 (en) * | 2015-08-04 | 2017-02-09 | Futurewei Technologies, Inc. | Core load knowledge for elastic load balancing of threads |
CN106487606A (en) * | 2015-08-28 | 2017-03-08 | 阿里巴巴集团控股有限公司 | Scheduling method and system for a network tester |
CN106933673B (en) * | 2015-12-30 | 2020-11-27 | 阿里巴巴集团控股有限公司 | Method and device for adjusting number of logical threads of component |
CN105700959B (en) * | 2016-01-13 | 2019-02-26 | 南京邮电大学 | Multi-thread partitioning and static balanced scheduling method for multi-core platforms |
CN107368615A (en) * | 2016-05-11 | 2017-11-21 | 中国科学院微电子研究所 | Feature parameter extraction method and device |
CN110109755B (en) * | 2016-05-17 | 2023-07-07 | 青岛海信移动通信技术有限公司 | Process scheduling method and device |
WO2018018372A1 (en) * | 2016-07-25 | 2018-02-01 | 张升泽 | Method and system for calculating current in electronic chip |
WO2018018373A1 (en) * | 2016-07-25 | 2018-02-01 | 张升泽 | Power calculation method and system for multiple core chips |
WO2018018425A1 (en) * | 2016-07-26 | 2018-02-01 | 张升泽 | Method and system for allocating threads of multi-kernel chip |
CN106227602A (en) * | 2016-07-26 | 2016-12-14 | 张升泽 | Allocation method and system supported among multi-core chips |
WO2018018424A1 (en) * | 2016-07-26 | 2018-02-01 | 张升泽 | Temperature control method and system based on chip |
WO2018018452A1 (en) * | 2016-07-27 | 2018-02-01 | 李媛媛 | Load balance application method and system in multi-core chip |
CN107797853B (en) * | 2016-09-07 | 2020-09-08 | 深圳市中兴微电子技术有限公司 | Task scheduling method and device and multi-core processor |
US10387207B2 (en) * | 2016-12-06 | 2019-08-20 | International Business Machines Corporation | Data processing |
CN106775975B (en) * | 2016-12-08 | 2020-02-14 | 青岛海信移动通信技术股份有限公司 | Process scheduling method and device |
CN108549574B (en) * | 2018-03-12 | 2022-03-15 | 深圳市万普拉斯科技有限公司 | Thread scheduling management method and device, computer equipment and storage medium |
CN108845882B (en) * | 2018-06-07 | 2022-03-01 | 网宿科技股份有限公司 | Method and device for realizing CPU load balance based on transcoding task scheduling |
CN108897622A (en) * | 2018-06-29 | 2018-11-27 | 郑州云海信息技术有限公司 | Task execution scheduling method and related apparatus |
CN109298919B (en) * | 2018-08-27 | 2021-09-07 | 西安工业大学 | Multi-core scheduling method for soft real-time systems with high-utilization task sets |
CN111831409B (en) * | 2020-07-01 | 2022-07-15 | Oppo广东移动通信有限公司 | Thread scheduling method and device, storage medium and electronic equipment |
CN112328542A (en) * | 2020-11-25 | 2021-02-05 | 天津凯发电气股份有限公司 | Method for importing data in heterogeneous data file into database |
CN112559176A (en) * | 2020-12-11 | 2021-03-26 | 广州橙行智动汽车科技有限公司 | Instruction processing method and device |
CN113626190B (en) * | 2021-08-04 | 2022-10-18 | 电子科技大学 | Load balancing method in microkernel operating system facing multi-kernel environment |
CN115168058B (en) * | 2022-09-06 | 2022-11-25 | 深流微智能科技(深圳)有限公司 | Thread load balancing method, device, equipment and storage medium |
- 2008-03-11 CN CNB2008100611349A patent/CN100562854C/en not_active Expired - Fee Related
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011150792A1 (en) * | 2010-11-29 | 2011-12-08 | 华为技术有限公司 | Power saving realization method and device for cpu |
US9377842B2 (en) | 2010-11-29 | 2016-06-28 | Huawei Technologies Co., Ltd. | Method and apparatus for realizing CPU power conservation |
CN106293935A (en) * | 2016-07-28 | 2017-01-04 | 张升泽 | Method and system for multi-interval distribution of current within a multi-core chip |
Also Published As
Publication number | Publication date |
---|---|
CN101256515A (en) | 2008-09-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN100562854C (en) | Implementation method for load balancing in a multi-core processor operating system | |
Leis et al. | Morsel-driven parallelism: a NUMA-aware query evaluation framework for the many-core age | |
US10360063B2 (en) | Proactive resource management for parallel work-stealing processing systems | |
US8707314B2 (en) | Scheduling compute kernel workgroups to heterogeneous processors based on historical processor execution times and utilizations | |
CN102521047B (en) | Method for realizing interrupt load balancing among multi-core processors | |
Chen et al. | WATS: Workload-aware task scheduling in asymmetric multi-core architectures | |
Wang et al. | Efficient and fair multi-programming in GPUs via effective bandwidth management | |
Chen et al. | Adaptive workload-aware task scheduling for single-ISA asymmetric multicore architectures | |
Hofmeyr et al. | Juggle: proactive load balancing on multicore computers | |
Chen et al. | Improving GPGPU performance via cache locality aware thread block scheduling | |
Song et al. | Energy efficiency optimization in big data processing platform by improving resources utilization | |
KR101765830B1 (en) | Multi-core system and method for driving the same | |
Geng et al. | Dynamic load balancing scheduling model based on multi-core processor | |
Shih et al. | Fairness scheduler for virtual machines on heterogeneous multi-core platforms | |
Huo et al. | An energy efficient task scheduling scheme for heterogeneous GPU-enhanced clusters | |
Zou et al. | Contention aware workload and resource co-scheduling on power-bounded systems | |
Liu et al. | Task scheduling of real-time systems on multi-core embedded processor | |
Yang et al. | Cache-aware task scheduling on multi-core architecture | |
David | Scheduling algorithms for asymmetric multi-core processors | |
Kale et al. | A user-defined schedule for OpenMP | |
Cao et al. | A task scheduling scheme for preventing temperature hotspot on GPU heterogeneous cluster | |
CN107577524A (en) | The GPGPU thread scheduling methods of non-memory access priority of task | |
Debattista et al. | Wait-free cache-affinity thread scheduling | |
Wang et al. | An approximate optimal solution to GPU workload scheduling | |
Bao et al. | Task scheduling of data-parallel applications on HSA platform |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2009-11-25; Termination date: 2012-03-11 |