CN105051689A - Method, apparatus and system for scheduling resource pool in multi-core system - Google Patents

Method, apparatus and system for scheduling resource pool in multi-core system

Info

Publication number
CN105051689A
CN105051689A (application CN201380003199.7A)
Authority
CN
China
Prior art keywords
task
subtask
shared queue
antenna level
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201380003199.7A
Other languages
Chinese (zh)
Inventor
吴素文
王吉滨
李琼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN105051689A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00 - Indexing scheme relating to G06F9/00
    • G06F2209/50 - Indexing scheme relating to G06F9/50
    • G06F2209/5011 - Pool
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00 - Indexing scheme relating to G06F9/00
    • G06F2209/50 - Indexing scheme relating to G06F9/50
    • G06F2209/5017 - Task decomposition

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A method for scheduling a resource pool in a multi-core system comprises: dividing a task in the resource pool into multiple subtasks according to a preset granularity; adding the multiple subtasks to a shared queue; and triggering multiple cores to obtain and process the subtasks from the shared queue in turn, until all the subtasks in the shared queue have been processed. A corresponding apparatus and system are also provided.

Description

Method, apparatus and system for scheduling resource pool in multi-core system
Technical Field

The present invention relates to the field of communication technologies, and in particular to a method, an apparatus and a system for scheduling a resource pool in a multi-core system.

Background
In a multi-core system, because the processing capability of each core is limited, the task of a single functional module can rarely be completed entirely on one core; instead, multiple cores must cooperate as a resource pool to process the task. Tasks therefore need to be distributed among the cores so that they are completed in time while keeping the load of each core balanced. The process of distributing tasks to the cores is referred to as scheduling of the resource pool.
In an existing scheme, a centralized scheduling module (also called a centralized scheduling core) divides the tasks of a functional module according to a certain granularity and distributes them to different cores, each core processing the portion of the functional module's task corresponding to the granularity assigned to it. For example, after core A and core B finish the tasks corresponding to certain functions, they trigger the centralized scheduling module. The module then performs a load pre-calculation based on the functions still to be processed by core C and core D, taking into account factors such as the antenna configuration and the load incurred when processing different users, and distributes tasks to core C and core D according to the result of that pre-calculation. For instance, core C may be assigned antennas 0-3 of the antenna-level task and users 0-9 of the user-level task, while core D is assigned antennas 4-7 of the antenna-level task and users 10-15 of the user-level task. Thereafter, core C and core D each process the tasks assigned to them.
During research and practice on the prior art, the inventors of the present invention found that, because the centralized scheduling module must estimate the load required by each service before distributing tasks, and because the service load depends on multiple dimensions of the service and is hard to estimate accurately, the tasks may be distributed unevenly among the cores. The resulting load gap between cores can become excessive: some cores may be overloaded while others still have spare capacity.

Summary
Embodiments of the present invention provide a method, an apparatus and a system for scheduling a resource pool in a multi-core system, so that the tasks in the resource pool can be scheduled in a more balanced manner.
According to a first aspect, an embodiment of the present invention provides a method for scheduling a resource pool in a multi-core system, including:
dividing a task in the resource pool into multiple subtasks according to a preset granularity;
adding the multiple subtasks to a shared queue; and
triggering multiple cores to obtain and process subtasks from the shared queue in turn, until all subtasks in the shared queue have been processed.
In a first possible implementation, with reference to the first aspect, the tasks in the resource pool include a user-level task and an antenna-level task, and dividing a task in the resource pool into multiple subtasks according to the preset granularity includes:
dividing the user-level task into multiple user-level subtasks according to the preset granularity, and dividing the antenna-level task into multiple antenna-level subtasks according to the preset granularity.
Adding the multiple subtasks to a shared queue is specifically: adding the user-level subtasks to a user-level shared queue, and adding the antenna-level subtasks to an antenna-level shared queue.
In a second possible implementation, with reference to the first possible implementation of the first aspect, triggering multiple cores to obtain and process subtasks from the shared queue in turn until all subtasks have been processed includes:
triggering the multiple cores to obtain and process antenna-level subtasks from the antenna-level shared queue in turn until all antenna-level subtasks in the antenna-level shared queue have been processed, after which the multiple cores obtain and process user-level subtasks from the user-level shared queue in turn until all user-level subtasks in the user-level shared queue have been processed.
In a third possible implementation, with reference to the first aspect, the tasks in the resource pool include a user-level task and an antenna-level task, and before dividing a task in the resource pool into multiple subtasks according to the preset granularity, the method further includes:
determining whether the task currently in the resource pool is an antenna-level task or a user-level task; and
if it is an antenna-level task, distributing the task directly; if it is a user-level task, performing the step of dividing the task into multiple subtasks according to the preset granularity.
In a fourth possible implementation, with reference to the third possible implementation of the first aspect, distributing the task currently in the resource pool directly includes:
obtaining processing-capability information of each core;
estimating the load required to process each antenna, to obtain an estimated load; and
distributing the task currently in the resource pool to the cores according to the processing-capability information of each core and the estimated load.
In a fifth possible implementation, with reference to the first aspect or any of the first to fourth possible implementations of the first aspect, the shared queue is specifically a software shared queue or a hardware shared queue.
According to a second aspect, an embodiment of the present invention further provides an apparatus for scheduling a resource pool in a multi-core system, including a dividing unit, an adding unit and a triggering unit:
the dividing unit is configured to divide a task in the resource pool into multiple subtasks according to a preset granularity; the adding unit is configured to add the multiple subtasks obtained by the dividing unit to a shared queue; and the triggering unit is configured to trigger multiple cores to process the task, so that the multiple cores obtain and process subtasks from the shared queue in turn until all subtasks in the shared queue have been processed.
In a first possible implementation, with reference to the second aspect, the tasks in the resource pool include a user-level task and an antenna-level task, and:
the dividing unit is specifically configured to divide the user-level task into multiple user-level subtasks according to the preset granularity, and divide the antenna-level task into multiple antenna-level subtasks according to the preset granularity; and
the adding unit is specifically configured to add the user-level subtasks to a user-level shared queue, and add the antenna-level subtasks to an antenna-level shared queue.
In a second possible implementation, with reference to the first possible implementation of the second aspect, the triggering unit is specifically configured to trigger the multiple cores to obtain and process antenna-level subtasks from the antenna-level shared queue in turn until all antenna-level subtasks in the antenna-level shared queue have been processed, after which the multiple cores obtain and process user-level subtasks from the user-level shared queue in turn until all user-level subtasks in the user-level shared queue have been processed.
In a third possible implementation, with reference to the second aspect, the tasks in the resource pool include a user-level task and an antenna-level task, and the apparatus further includes a judging unit and an allocating unit:
the judging unit is configured to determine whether the task currently in the resource pool is an antenna-level task or a user-level task;
the allocating unit is configured to distribute the task in the resource pool directly when the judging unit determines that it is an antenna-level task; and
the dividing unit is specifically configured to divide the task in the resource pool into multiple subtasks according to the preset granularity when the judging unit determines that it is a user-level task.
In a fourth possible implementation, with reference to the third possible implementation of the second aspect, the allocating unit includes an obtaining subunit, an estimating subunit and a distributing subunit:
the obtaining subunit is configured to obtain processing-capability information of each core;
the estimating subunit is configured to estimate the load required to process each antenna, to obtain an estimated load; and
the distributing subunit is configured to distribute the task currently in the resource pool to the cores according to the processing-capability information obtained by the obtaining subunit and the estimated load obtained by the estimating subunit.
According to a third aspect, an embodiment of the present invention further provides a communication system, including the apparatus for scheduling a resource pool in a multi-core system provided by any of the embodiments of the present invention.
According to a fourth aspect, an embodiment of the present invention further provides a communication device of a multi-core system, including a processor, a memory for storing data and programs, and a transceiver module for transmitting and receiving data.
The processor is configured to: divide a task in the resource pool into multiple subtasks according to a preset granularity; add the multiple subtasks to a shared queue; and trigger multiple cores to obtain and process subtasks from the shared queue in turn until all subtasks in the shared queue have been processed.
In a first possible implementation, with reference to the fourth aspect, the tasks in the resource pool include a user-level task and an antenna-level task, and:
the processor is specifically configured to divide the user-level task into multiple user-level subtasks according to the preset granularity, and divide the antenna-level task into multiple antenna-level subtasks according to the preset granularity; add the user-level subtasks to a user-level shared queue, and add the antenna-level subtasks to an antenna-level shared queue; and trigger the multiple cores to obtain and process antenna-level subtasks from the antenna-level shared queue in turn until all antenna-level subtasks have been processed, after which the multiple cores obtain and process user-level subtasks from the user-level shared queue in turn until all user-level subtasks have been processed.
In a second possible implementation, with reference to the fourth aspect, the processor is further configured to determine whether the task currently in the resource pool is an antenna-level task or a user-level task; if it is an antenna-level task, distribute the task directly; and if it is a user-level task, perform the operation of dividing the task into multiple subtasks according to the preset granularity.
In the embodiments of the present invention, a task in the resource pool is divided into multiple subtasks according to a preset granularity, the subtasks are added to a shared queue, and multiple cores are triggered to obtain and process subtasks from the shared queue in turn. Because no load estimation is required in this scheme, and the cores that process the task obtain subtasks from the shared queue according to their own load, uneven task distribution caused by inaccurate load estimation is avoided, the tasks in the resource pool can be scheduled in a more balanced manner, and the load of the cores is effectively balanced in real time.

Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person skilled in the art may derive other drawings from these accompanying drawings without creative efforts.
Fig. 1 is a flowchart of a method for scheduling a resource pool in a multi-core system according to an embodiment of the present invention;
Fig. 2a is a schematic scenario diagram of a distributed scheduling mode according to an embodiment of the present invention;
Fig. 2b is another flowchart of the method for scheduling a resource pool in a multi-core system according to an embodiment of the present invention;
Fig. 3a is a schematic scenario diagram of a scheduling mode combining distributed and centralized scheduling according to an embodiment of the present invention;
Fig. 3b is another flowchart of the method for scheduling a resource pool in a multi-core system according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an apparatus for scheduling a resource pool in a multi-core system according to an embodiment of the present invention;
Fig. 5 is another schematic structural diagram of the apparatus for scheduling a resource pool in a multi-core system according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a communication device of a multi-core system according to an embodiment of the present invention.

Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
Embodiments of the present invention provide a method, an apparatus and a system for scheduling a resource pool in a multi-core system, which are described in detail below.

Embodiment 1

This embodiment is described from the perspective of the apparatus for scheduling a resource pool in a multi-core system; the apparatus may specifically be the centralized scheduling module in the multi-core system. A method for scheduling a resource pool in a multi-core system includes: dividing a task in the resource pool into multiple subtasks according to a preset granularity, adding the multiple subtasks to a shared queue, and triggering multiple cores to obtain and process subtasks from the shared queue in turn until all subtasks in the shared queue have been processed.
As shown in Fig. 1, the specific procedure may be as follows:
101. Divide a task in the resource pool into multiple subtasks according to a preset granularity.
The granularity of the division can be configured according to the requirements of the actual application; for example, the division may be performed according to the functions of the task.
The tasks in the resource pool may be of multiple types, for example user-level tasks and antenna-level tasks. When dividing subtasks, the division may be performed separately for each task type: the user-level task is divided into multiple user-level subtasks according to the preset granularity, and the antenna-level task is divided into multiple antenna-level subtasks according to the preset granularity.
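As a minimal sketch of this per-type division (the class and function names, and the one-item-per-subtask granularity, are illustrative assumptions, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass
class Subtask:
    task_type: str   # e.g. "antenna" or "user"
    index: int       # which antenna/user this subtask starts at

def divide_task(task_type: str, item_count: int, granularity: int = 1):
    """Split a task covering `item_count` items (antennas or users)
    into subtasks of `granularity` items each."""
    return [Subtask(task_type, i) for i in range(0, item_count, granularity)]

# An antenna-level task over 8 antennas becomes 8 antenna-level subtasks,
# a user-level task over 16 users becomes 16 user-level subtasks.
antenna_subtasks = divide_task("antenna", 8)
user_subtasks = divide_task("user", 16)
```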
Optionally, to improve processing efficiency, tasks of different priorities may be handled differently. Because the load required for antenna processing is relatively large, a high-priority task such as the antenna-level task may be handled in a centralized scheduling mode; because the load required for user processing is relatively small, a low-priority task such as the user-level task may be handled in a distributed scheduling mode. That is, before step 101 (dividing a task in the resource pool into multiple subtasks according to the preset granularity), the method may further include:
determining whether the task currently in the resource pool is an antenna-level task or a user-level task; if it is an antenna-level task, distributing the task directly; and if it is a user-level task, performing the step of dividing the task into multiple subtasks according to the preset granularity.
The method of distributing a task in the resource pool directly may specifically be as follows:
obtaining processing-capability information of each core, estimating the load required to process each antenna according to the processing-capability information to obtain an estimated load, and distributing the task currently in the resource pool (that is, the antenna-level task) to the cores according to the estimated load.
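A hedged sketch of this centralized path follows. The patent only states that assignment follows per-core capability and estimated load; the greedy most-spare-capacity rule below, and all names and numbers, are assumptions for illustration:

```python
def distribute_centrally(antenna_loads, core_capabilities):
    """Greedily assign each antenna to the core with the largest
    remaining capacity (capability minus load already assigned)."""
    assigned = {core: [] for core in core_capabilities}
    used = {core: 0.0 for core in core_capabilities}
    # Heaviest antennas first tends to balance better under a greedy rule.
    for antenna, load in sorted(antenna_loads.items(), key=lambda kv: -kv[1]):
        core = max(core_capabilities, key=lambda c: core_capabilities[c] - used[c])
        assigned[core].append(antenna)
        used[core] += load
    return assigned

# Two equally capable cores, four antennas with estimated loads:
plan = distribute_centrally({0: 3.0, 1: 1.0, 2: 2.0, 3: 2.0},
                            {"C": 10.0, "D": 10.0})
```

As the background section notes, the weakness of this path is that `antenna_loads` is only an estimate; the shared-queue scheme avoids needing it for user-level work.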
In addition, before step 101 (dividing a task in the resource pool into multiple subtasks according to the preset granularity), other processing may be performed. For example, a task may first be processed by a preceding processing module, which then triggers the scheduling apparatus (for example the centralized scheduling module) to perform step 101. That is, before step 101, the method may further include:
receiving a trigger from the preceding processing module after it is determined that the preceding processing module has finished processing its task. The preceding processing module may specifically be another core, a hardware accelerator, or a hardware module such as an intellectual-property core (IP module, also referred to as an IP core).
It should be noted that the embodiments of the present invention are described only by taking the user-level task and the antenna-level task as examples. It should be understood that the tasks in the resource pool may also be of other types, whose implementations are the same and are not repeated here.
102. Add the multiple subtasks obtained by the division in step 101 to a shared queue.
For example, if in step 101 the user-level task is divided into multiple user-level subtasks and the antenna-level task is divided into multiple antenna-level subtasks, the user-level subtasks can be added to a user-level shared queue and the antenna-level subtasks to an antenna-level shared queue.
The shared queue (such as the user-level shared queue or the antenna-level shared queue) may be a software shared queue or a hardware shared queue.
103. Trigger multiple cores to obtain and process subtasks from the shared queue in turn, until all subtasks in the shared queue have been processed.
For example, the multiple cores may be triggered to obtain and process antenna-level subtasks from the antenna-level shared queue in turn. After all antenna-level subtasks in the antenna-level shared queue have been processed, the cores (triggered again, or proceeding directly without a further trigger) obtain and process user-level subtasks from the user-level shared queue in turn, until all user-level subtasks have been processed. Taking two cores, core C and core D, as an example:
When core C and core D start processing, each obtains an antenna-level subtask from the antenna-level shared queue; for example, core C gets the first antenna-level subtask (the task of antenna 0) and core D gets the second (the task of antenna 1). Because the software processing loads differ, core D may finish first, in which case core D continues by obtaining the next antenna-level subtask from the queue (the third one, the task of antenna 2). Similarly, if core C finishes the task of antenna 0 while core D is still processing the task of antenna 2, core C obtains the next antenna-level subtask (the fourth one, the task of antenna 3); if instead core D finishes the task of antenna 2 while core C has not yet finished the task of antenna 0, core D obtains the fourth subtask (the task of antenna 3), and so on. In other words, core C and core D obtain antenna-level subtasks from the antenna-level shared queue in turn, and each core obtains the next antenna-level subtask only after completing the one it has. Once all tasks in the shared queue have been processed, core C and core D can no longer obtain subtasks from it, which indicates that the task in the resource pool has been fully processed; core C and core D then start obtaining and processing user-level subtasks from the user-level shared queue, in a manner similar to obtaining antenna-level subtasks, which is not repeated here.
It should be noted that if the shared queue is a software shared queue, the software must guarantee that no two cores obtain the same subtask; if it is a hardware shared queue, the hardware must provide that guarantee.
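A minimal software sketch of this self-scheduling behavior, assuming one thread per core and Python's thread-safe `queue.Queue` standing in for the shared queue (its atomic `get_nowait` plays the role of the software/hardware guarantee that no two cores receive the same subtask):

```python
import queue
import threading

def run_core(name, shared_queue, results):
    # Each "core" pulls the next subtask only after finishing its current one;
    # queue.Queue hands out each item exactly once across all threads.
    while True:
        try:
            subtask = shared_queue.get_nowait()
        except queue.Empty:
            return  # queue drained: the resource-pool task is finished
        results.append((name, subtask))  # stand-in for real processing

shared = queue.Queue()
for antenna in range(8):
    shared.put(f"antenna-{antenna}")

results = []  # list.append is atomic under CPython's GIL
cores = [threading.Thread(target=run_core, args=(c, shared, results))
         for c in ("core-C", "core-D")]
for t in cores:
    t.start()
for t in cores:
    t.join()
```

A faster core simply loops more often and so absorbs more subtasks, which is the real-time load balancing the patent claims without any load pre-calculation.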
In addition, if the tasks in the resource pool are scheduled using the distributed scheduling mode combined with the centralized scheduling mode, only the user-level task is divided beforehand and the user-level subtasks are added to the user-level shared queue; each core then obtains and processes only user-level subtasks from the user-level shared queue, while the antenna-level task is distributed directly, which is not repeated here.
It can be seen from the above that in this embodiment a task in the resource pool is divided into multiple subtasks according to a preset granularity, the subtasks are added to a shared queue, and multiple cores are triggered to obtain and process subtasks from the shared queue in turn. Because no load estimation is required, and the cores that process the task obtain subtasks from the shared queue according to their own load, uneven task distribution caused by inaccurate load estimation is avoided, the tasks in the resource pool can be scheduled in a more balanced manner, and the load of the cores is effectively balanced in real time. The method described in Embodiment 1 is described in further detail below by way of example in Embodiments 2 and 3.

Embodiment 2
In this embodiment, the apparatus for scheduling the resource pool in the multi-core system is specifically the centralized scheduling module of the multi-core system, the tasks in the resource pool include a user-level task and an antenna-level task, and the distributed scheduling mode is used. Referring to Fig. 2a, a schematic scenario diagram of the distributed scheduling mode: both the user-level task and the antenna-level task are divided, yielding user-level subtasks and antenna-level subtasks respectively, which are added to their respective shared queues, that is, the user-level subtasks are written (added) to the user-level shared queue and the antenna-level subtasks are written (added) to the antenna-level shared queue. Multiple cores are then triggered to obtain and process antenna-level subtasks from the antenna-level shared queue in turn, and afterwards to obtain and process user-level subtasks from the user-level shared queue in turn. This is described in more detail below.
As shown in Fig. 2b, the specific procedure may be as follows:
201. After the preceding processing module has finished processing its task, it triggers the centralized scheduling module.
The preceding processing module may specifically be another core, a hardware accelerator, or a hardware IP module. For example, it may be core A and core B; that is, after core A and core B have finished processing their tasks, they trigger the centralized scheduling module.
202. The centralized scheduling module divides the user-level task into multiple user-level subtasks according to a preset granularity, and divides the antenna-level task into multiple antenna-level subtasks according to the preset granularity.
The granularity of the division can be configured according to the requirements of the actual application; for example, the division may be performed according to the functions of the task.
203. The centralized scheduling module adds the user-level subtasks to the user-level shared queue, and adds the antenna-level subtasks to the antenna-level shared queue.
The shared queues, such as the user-level shared queue and the antenna-level shared queue, may be software shared queues or hardware shared queues.
204. The centralized scheduling module triggers multiple cores to obtain and process antenna-level subtasks from the antenna-level shared queue in turn. After all antenna-level subtasks in the antenna-level shared queue have been processed, step 205 is performed. Taking two cores, core C and core D, as an example:
When core C and core D start processing, each obtains an antenna-level subtask from the antenna-level shared queue; for example, core C gets the first antenna-level subtask (the task of antenna 0) and core D gets the second (the task of antenna 1). Because the software processing loads differ, core D may finish first, in which case core D continues by obtaining the next antenna-level subtask from the queue (the third one, the task of antenna 2). Similarly, if core C finishes the task of antenna 0 while core D is still processing the task of antenna 2, core C obtains the next antenna-level subtask (the fourth one, the task of antenna 3); if instead core D finishes the task of antenna 2 while core C has not yet finished the task of antenna 0, core D obtains the fourth subtask (the task of antenna 3), and so on. In other words, core C and core D obtain antenna-level subtasks from the antenna-level shared queue in turn, and each core obtains the next antenna-level subtask only after completing the one it has.
Once all tasks in the antenna-level shared queue have been processed, core C and core D can no longer obtain subtasks from it, which indicates that the antenna-level task in the resource pool has been fully processed; core C and core D then start obtaining and processing user-level subtasks from the user-level shared queue, that is, step 205 is performed.
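The two-phase order of steps 204 and 205 can be sketched as follows, under the same thread-per-core assumption as before. The `Barrier` is an illustrative way to realize "all antenna-level subtasks processed before any user-level subtask is fetched"; the patent does not prescribe a mechanism:

```python
import queue
import threading

def run_core(barrier, antenna_q, user_q, log, lock):
    # Phase 1 (step 204): drain the antenna-level shared queue.
    while True:
        try:
            item = antenna_q.get_nowait()
        except queue.Empty:
            break
        with lock:
            log.append(("antenna", item))
    # Wait until every core has finished its antenna-level work...
    barrier.wait()
    # Phase 2 (step 205): ...then drain the user-level shared queue.
    while True:
        try:
            item = user_q.get_nowait()
        except queue.Empty:
            break
        with lock:
            log.append(("user", item))

antenna_q, user_q = queue.Queue(), queue.Queue()
for a in range(4):
    antenna_q.put(a)
for u in range(6):
    user_q.put(u)

log, lock = [], threading.Lock()
barrier = threading.Barrier(2)  # two "cores": C and D
cores = [threading.Thread(target=run_core,
                          args=(barrier, antenna_q, user_q, log, lock))
         for _ in range(2)]
for t in cores:
    t.start()
for t in cores:
    t.join()
# Every antenna-level entry in the log precedes every user-level entry.
```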
205. The multiple cores obtain and process user-level subtasks from the user-level shared queue in turn, until all user-level subtasks in the user-level shared queue have been processed. Again taking core C and core D as an example:
When core C and core D start processing, each obtains a user-level subtask from the user-level shared queue; for example, core C gets the first user-level subtask (the task of user 0) and core D gets the second (the task of user 1). Because the software processing loads differ, core D may finish first, in which case core D continues by obtaining the next user-level subtask from the queue (the third one, the task of user 2). Similarly, if core C finishes the task of user 0 while core D is still processing the task of user 2, core C obtains the next user-level subtask (the fourth one, the task of user 3); if instead core D finishes the task of user 2 while core C has not yet finished the task of user 0, core D obtains the fourth subtask (the task of user 3), and so on. In other words, core C and core D obtain user-level subtasks from the user-level shared queue in turn, and each core obtains the next user-level subtask only after completing the one it has.
It should be noted that, in steps 204 and 205, if a shared queue (including the antenna-level shared queue and the user-level shared queue) is a software shared queue, software must guarantee that no two cores obtain the same subtask; if it is a hardware shared queue, hardware must guarantee that no two cores obtain the same subtask.
In addition, it should be noted that the above is described only by taking core C and core D as an example; it should be understood that more cores may be included, such as core E, core F and core G, whose specific implementation is the same as the above and is not repeated here.
As can be seen from the foregoing, in this embodiment the antenna-level tasks and user-level tasks in the resource pool are divided into multiple subtasks according to a preset granularity, these subtasks are then added to the antenna-level shared queue and the user-level shared queue respectively, and multiple cores are triggered to successively obtain subtasks from these shared queues and process them. Because no load estimation is required in this scheme, and the multiple cores that process the tasks instead obtain subtasks from the shared queues according to their own load conditions, the problem of uneven task distribution among cores caused by inaccurate load estimation can be avoided, the tasks in the resource pool can be scheduled more evenly, and the load of each core can be balanced effectively in real time.

Embodiment three
As in embodiment two, in this embodiment the scheduling apparatus for the resource pool in the multi-core system is still specifically the centralized dispatching module in the multi-core system, and the tasks in the resource pool are described by taking user-level tasks and antenna-level tasks as an example. Unlike embodiment two, this embodiment is described by taking as an example a scheduling mode in which distributed and centralized scheduling are combined.
Referring to Fig. 3a, which is a schematic diagram of a scenario of the scheduling mode combining distributed and centralized scheduling. As can be seen from Fig. 3a, in this embodiment, antenna-level tasks are scheduled by direct distribution, while user-level tasks are divided to obtain user-level subtasks, which are added to the user-level shared queue; multiple cores are then triggered to successively obtain user-level subtasks from the user-level shared queue and process them. This is described in more detail below.
As shown in Fig. 3b, the specific flow can be as follows:
301. After the pre-stage processing module has finished processing a task, it triggers the centralized dispatching module.
The pre-stage processing module may specifically be another core, a hardware accelerator, a hardware IP device, or the like. For example, the pre-stage processing module may specifically be core A and core B, i.e. after core A and core B have finished processing their tasks, they trigger the centralized dispatching module.
302. The centralized dispatching module determines whether the task in the current resource pool is an antenna-level task or a user-level task; if it is an antenna-level task, step 303 is performed; otherwise, if it is a user-level task, step 304 is performed.
303. The centralized dispatching module directly distributes the antenna-level tasks. For example, this may specifically be as follows: obtain the processing capability information of each core, estimate the load required for processing each antenna to obtain an estimated load, and then distribute the tasks in the current resource pool (i.e. the antenna-level tasks) to the cores according to the processing capability information of each core and the obtained estimated load.
For example, taking the case where the multiple cores include core C and core D, the centralized dispatching module can obtain the processing capability information of core C and of core D, pre-compute the load required for processing each antenna, and then distribute tasks to core C and core D according to the processing capability information of core C, the processing capability information of core D and the result of the load pre-computation. For instance, the tasks distributed to core C may be: processing antennas 0-3 of the antenna-level tasks; and the tasks distributed to core D may be: processing antennas 4-7 of the antenna-level tasks; and so on. Thereafter, core C and core D each process the tasks distributed to them.
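One way the direct distribution of step 303 could work is a greedy assignment that weighs each core's processing capability against the estimated per-antenna load. The text does not specify the estimation model or the assignment heuristic, so everything below (the function name, the capability numbers, the greedy rule) is an illustrative assumption:

```python
def distribute_antennas(antenna_loads, core_capabilities):
    """Greedy sketch of step 303: assign each antenna-level task to the core
    whose estimated finish time (assigned load / capability) would be lowest
    after taking it. antenna_loads maps antenna id -> estimated load;
    core_capabilities maps core name -> relative processing capability."""
    assigned = {core: [] for core in core_capabilities}
    load = {core: 0.0 for core in core_capabilities}
    # Place the heaviest antennas first so they anchor the balance.
    for antenna, cost in sorted(antenna_loads.items(), key=lambda kv: -kv[1]):
        core = min(load, key=lambda c: (load[c] + cost) / core_capabilities[c])
        assigned[core].append(antenna)
        load[core] += cost
    return assigned

# Example: core C and core D with equal capability, 8 antennas of equal load,
# matching the antennas 0-3 / antennas 4-7 split described in the text.
plan = distribute_antennas({a: 1.0 for a in range(8)}, {"C": 1.0, "D": 1.0})
assert sorted(len(v) for v in plan.values()) == [4, 4]  # 4 antennas per core
```

Note this is the step where inaccurate load estimates cause the imbalance the later shared-queue phase compensates for: the assignment is fixed before processing begins.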
304. The centralized dispatching module divides the user-level tasks into multiple user-level subtasks according to a preset granularity.
The granularity of division can be configured according to the demands of the practical application; for example, the division can specifically be performed according to the functions of the task.
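As a concrete illustration of this configurable granularity, the division of step 304 could be sketched as below. The text leaves the granularity rule application-defined, so the per-user chunking used here, the function name, and the granularity value are all assumptions:

```python
def divide_user_task(user_ids, granularity=1):
    """Illustrative step 304: split a user-level task into user-level
    subtasks by chunking its users according to a configurable granularity.
    A real system might instead split by processing function or other
    application-specific boundaries."""
    return [user_ids[i:i + granularity]
            for i in range(0, len(user_ids), granularity)]

# Six users divided at granularity 2 yields three user-level subtasks,
# each ready to be added to the user-level shared queue (step 305).
subtasks = divide_user_task([0, 1, 2, 3, 4, 5], granularity=2)
assert subtasks == [[0, 1], [2, 3], [4, 5]]
```

A finer granularity produces more, smaller subtasks and hence smoother load balancing across cores, at the cost of more queue operations.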
305. The centralized dispatching module adds the user-level subtasks to the user-level shared queue.
The user-level shared queue may be a software shared queue or a hardware shared queue.
306. After each core has finished processing its antenna-level tasks, the centralized dispatching module triggers the multiple cores (alternatively, the multiple cores need not be triggered by the centralized dispatching module, but may act of their own accord) to successively obtain user-level subtasks from the user-level shared queue and process them, until the user-level subtasks in the user-level shared queue are all processed. For example, taking the case where the multiple cores include core C and core D, this may specifically be as follows:
After core C and core D have finished their antenna-level tasks (e.g. the antenna-level tasks distributed in step 303), the centralized dispatching module triggers core C and core D to each obtain user-level subtasks from the user-level shared queue. For example, core C gets the first user-level subtask, e.g. the task of user 0, and core D gets the second user-level subtask, e.g. the task of user 1. Because software processing loads differ, core D may finish first; core D then proceeds to obtain the next user-level subtask from the user-level shared queue (i.e. the third user-level subtask), e.g. the task of user 2. Similarly, if core C has finished the task of user 0 while core D is still processing the task of user 2, core C obtains the next user-level subtask from the user-level shared queue (i.e. the fourth user-level subtask), namely the task of user 3; conversely, if core D has finished the task of user 2 while core C has not yet finished the task of user 0, core D obtains the next user-level subtask from the user-level shared queue (i.e. the fourth user-level subtask), namely the task of user 3, and so on. That is, core C and core D successively obtain user-level subtasks from the user-level shared queue and process them, where each core obtains the next user-level subtask only after it has finished the user-level subtask it currently holds.
It should be noted that, in step 306, if the user-level shared queue is a software shared queue, software must guarantee that no two cores obtain the same subtask; if it is a hardware shared queue, hardware must guarantee that no two cores obtain the same subtask.
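The software guarantee mentioned here amounts to making the "check and remove" of a subtask a single atomic operation. A minimal sketch of such a software shared queue, assuming a lock-based design (the class name and task labels are made up; a hardware shared queue would provide the same guarantee in hardware):

```python
import threading

class SoftwareSharedQueue:
    """Illustrative software shared queue: a mutex makes the pop atomic,
    which is the software guarantee required so that no two cores ever
    obtain the same subtask."""
    def __init__(self, subtasks):
        self._items = list(subtasks)
        self._lock = threading.Lock()

    def pop_next(self):
        # The emptiness check and the removal must happen under one lock
        # acquisition; a separate "is it empty?" check followed by an
        # unlocked pop would race between cores.
        with self._lock:
            if self._items:
                return self._items.pop(0)
            return None  # queue drained: caller moves on (cf. step 306)

q = SoftwareSharedQueue(["user 0", "user 1", "user 2"])
taken = [q.pop_next() for _ in range(4)]
assert taken == ["user 0", "user 1", "user 2", None]
```

Returning `None` on an empty queue gives each core an unambiguous signal that the current phase's subtasks are exhausted.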
In addition, it should be noted that the above is described only by taking core C and core D as an example; it should be understood that more cores may be included, such as core E, core F and core G, whose specific implementation is the same as the above and is not repeated here.
As can be seen from the foregoing, in this embodiment the antenna-level tasks in the resource pool are directly distributed, while the user-level tasks are divided into multiple subtasks according to a preset granularity; these subtasks are then added to the user-level shared queue, and after the multiple cores have finished processing the antenna-level tasks, the multiple cores are triggered to successively obtain user-level subtasks from the user-level shared queue and process them. Because no load estimation is required when processing the user-level tasks in this scheme, and the multiple cores that process the tasks instead obtain user-level subtasks from the user-level shared queue according to their own load conditions, the uneven task distribution among cores caused by inaccurate load estimation when processing the antenna-level tasks can be compensated, the tasks in the resource pool can be scheduled more evenly, and the load of each core can be balanced effectively in real time.

Embodiment four
In order to better implement the above method, an embodiment of the present invention further provides a scheduling apparatus for a resource pool in a multi-core system. As shown in Fig. 4, the scheduling apparatus includes a division unit 401, an adding unit 402 and a trigger unit 403:
The division unit 401 is configured to divide the tasks in the resource pool into multiple subtasks according to a preset granularity.
The granularity of division can be configured according to the demands of the practical application; for example, the division can specifically be performed according to the functions of the task.
The adding unit 402 is configured to add the multiple subtasks obtained by the division unit 401 to a shared queue. The trigger unit 403 is configured to trigger multiple cores to successively obtain subtasks from the shared queue and process them, until the subtasks in the shared queue are all processed.
The tasks in the resource pool can be of multiple types; for example, they can include user-level tasks and antenna-level tasks. When dividing subtasks, the division can be performed separately according to the type of the task; for example, the user-level tasks can be divided into multiple user-level subtasks according to a preset granularity, and the antenna-level tasks can be divided into multiple antenna-level subtasks according to a preset granularity; i.e.:
The division unit 401 can specifically be configured to divide the user-level tasks into multiple user-level subtasks according to the preset granularity, and divide the antenna-level tasks into multiple antenna-level subtasks according to the preset granularity.
In that case, the adding unit 402 can specifically be configured to add the user-level subtasks to a user-level shared queue, and add the antenna-level subtasks to an antenna-level shared queue.
The trigger unit 403 can specifically be configured to trigger the multiple cores to successively obtain antenna-level subtasks from the antenna-level shared queue and process them until the antenna-level subtasks in the antenna-level shared queue are all processed, after which the multiple cores successively obtain user-level subtasks from the user-level shared queue and process them, until the user-level subtasks in the user-level shared queue are all processed.
The shared queue (e.g. the user-level shared queue or the antenna-level shared queue) may be a software shared queue or a hardware shared queue.
Optionally, in order to improve processing efficiency, different processing can also be performed according to the priority of tasks. For example, because the load required for antenna processing is larger, high-priority tasks such as antenna-level tasks can be handled using a centralized scheduling mode; and because the load required for user processing is smaller, low-priority tasks such as user-level tasks can be handled using a distributed scheduling mode. That is, as shown in Fig. 5, the scheduling apparatus for the resource pool in the multi-core system can further include a judging unit 404 and an allocation unit 405:
The judging unit 404 is configured to determine whether the task in the current resource pool is an antenna-level task or a user-level task.
The allocation unit 405 can be configured to directly distribute the task in the current resource pool (i.e. the antenna-level task) when the judging unit 404 determines that the task in the current resource pool is an antenna-level task.
The division unit 401 is specifically configured to divide the task in the current resource pool (i.e. the user-level task) into multiple subtasks according to the preset granularity when the judging unit 404 determines that the task in the current resource pool is a user-level task.
For example, the allocation unit 405 can include an obtaining subunit, an estimation subunit and a distribution subunit:

The obtaining subunit is configured to obtain the processing capability information of each core.

The estimation subunit is configured to estimate the load required for processing each antenna, to obtain an estimated load. The distribution subunit is configured to distribute the task in the current resource pool to each core according to the processing capability information of each core obtained by the obtaining subunit and the estimated load obtained by the estimation subunit.
In addition, before the division unit 401 divides the tasks in the resource pool into multiple subtasks according to the preset granularity, other processing can also be performed. For example, the tasks can first be processed by a pre-stage processing module, and only then is the division unit 401 triggered to perform the operation of dividing the tasks in the resource pool into multiple subtasks according to the preset granularity; i.e. the scheduling apparatus for the resource pool in the multi-core system can further include a receiving unit.

The receiving unit is configured to receive the trigger of the pre-stage processing module after it is determined that the pre-stage processing module has finished processing the tasks.
The pre-stage processing module may specifically be another core, a hardware accelerator, a hardware IP, or the like. In specific implementation, the above units can be realized as independent entities, or can be combined and realized as one or several entities; for example, the scheduling apparatus for the resource pool in the multi-core system may specifically be the centralized dispatching module in the multi-core system, see embodiments two and three. For the specific implementation of the above units, reference can be made to the method embodiments above, which are not repeated here.
As can be seen from the foregoing, in this embodiment the division unit 401 of the scheduling apparatus for the resource pool in the multi-core system can divide the tasks in the resource pool into multiple subtasks according to a preset granularity; the adding unit 402 then adds these subtasks to a shared queue, and the trigger unit 403 triggers multiple cores to successively obtain subtasks from the shared queue and process them. Because no load estimation is required in this scheme, and the multiple cores that process the tasks instead obtain subtasks from the shared queue according to their own load conditions, the problem of uneven task distribution among cores caused by inaccurate load estimation can be avoided, the tasks in the resource pool can be scheduled more evenly, and the load of each core can be balanced effectively in real time.

Embodiment five
Correspondingly, an embodiment of the present invention further provides a communication system, which includes the scheduling apparatus for a resource pool in a multi-core system according to any embodiment of the present invention, where the scheduling apparatus may specifically be the centralized dispatching module in the multi-core system. For example, this may specifically be as follows: the centralized dispatching module is configured to divide the tasks in the resource pool into multiple subtasks according to a preset granularity, add the multiple subtasks to a shared queue, and trigger multiple cores to successively obtain subtasks from the shared queue and process them, until the subtasks in the shared queue are processed.
The granularity of division can be configured according to the demands of the practical application; for example, the division can specifically be performed according to the functions of the task.
The tasks in the resource pool can be of multiple types; for example, they can include user-level tasks and antenna-level tasks. When dividing subtasks, the division can be performed separately according to the type of the task; for example, the user-level tasks can be divided into multiple user-level subtasks according to a preset granularity, and the antenna-level tasks can be divided into multiple antenna-level subtasks according to a preset granularity. In that case, the centralized dispatching module can specifically add the user-level subtasks to a user-level shared queue and the antenna-level subtasks to an antenna-level shared queue, and thereafter trigger multiple cores to successively obtain antenna-level subtasks from the antenna-level shared queue and process them until the antenna-level subtasks in the antenna-level shared queue are all processed, after which the multiple cores successively obtain user-level subtasks from the user-level shared queue and process them, until the user-level subtasks in the user-level shared queue are all processed.
Optionally, in order to improve processing efficiency, different processing can also be performed according to the priority of tasks. For example, because the load required for antenna processing is larger, high-priority tasks such as antenna-level tasks can be handled using a centralized scheduling mode; and because the load required for user processing is smaller, low-priority tasks such as user-level tasks can be handled using a distributed scheduling mode; i.e.:
The centralized dispatching module is further configured to determine whether the task in the current resource pool is an antenna-level task or a user-level task; if it is an antenna-level task, directly distribute the task in the current resource pool (i.e. the antenna-level task); if it is a user-level task, perform the operation of dividing the task in the current resource pool (i.e. the user-level task) into multiple subtasks according to the preset granularity.
The operation of directly distributing the tasks in the resource pool may specifically be as follows:
Obtain the processing capability information of each core, estimate the load required for processing each antenna to obtain an estimated load, and then distribute the tasks in the current resource pool (i.e. the antenna-level tasks) to the cores according to the processing capability information of each core and the obtained estimated load.
In addition, the communication system can also include multiple cores, which can be used to, under the triggering of the centralized dispatching module, successively obtain subtasks from the shared queue and process them, until the subtasks in the shared queue are processed; for details, reference can be made to the embodiments above, which are not repeated here. Further, the communication system can also include a pre-stage processing module, configured to trigger the centralized dispatching module to perform the above scheduling operation after it has finished processing a task; this is not repeated here.
For the specific implementation of each of the above devices, reference can be made to the method embodiments above, which are not repeated here. As can be seen from the foregoing, the centralized dispatching module in the communication system of this embodiment divides the tasks in the resource pool into multiple subtasks according to a preset granularity, then adds these subtasks to a shared queue, and triggers multiple cores to successively obtain subtasks from the shared queue and process them. Because no load estimation is required in this scheme, and the multiple cores that process the tasks instead obtain subtasks from the shared queue according to their own load conditions, the problem of uneven task distribution among cores caused by inaccurate load estimation can be avoided, the tasks in the resource pool can be scheduled more evenly, and the load of each core can be balanced effectively in real time.

Embodiment six
An embodiment of the present invention further provides a communication device of a multi-core system. As shown in Fig. 6, the communication device includes a processor 601, a memory 602 for storing data and programs, and a transceiver module 603 for transmitting and receiving data, where:
The processor 601 is configured to divide the tasks in the resource pool into multiple subtasks according to a preset granularity, add the multiple subtasks to a shared queue, and trigger multiple cores to successively obtain subtasks from the shared queue and process them, until the subtasks in the shared queue are processed.
The granularity of division can be configured according to the demands of the practical application; for example, the division can specifically be performed according to the functions of the task.
The tasks in the resource pool can be of multiple types; for example, they can include user-level tasks and antenna-level tasks. When dividing subtasks, the division can be performed separately according to the type of the task; for example, the user-level tasks can be divided into multiple user-level subtasks according to a preset granularity, and the antenna-level tasks can be divided into multiple antenna-level subtasks according to a preset granularity; i.e.:
The processor 601 can specifically be configured to divide the user-level tasks into multiple user-level subtasks according to the preset granularity and divide the antenna-level tasks into multiple antenna-level subtasks according to the preset granularity; add the user-level subtasks to a user-level shared queue and the antenna-level subtasks to an antenna-level shared queue; and trigger multiple cores to successively obtain antenna-level subtasks from the antenna-level shared queue and process them until the antenna-level subtasks in the antenna-level shared queue are all processed, after which the multiple cores successively obtain user-level subtasks from the user-level shared queue and process them, until the user-level subtasks in the user-level shared queue are all processed. It should be noted that if the shared queue is a software shared queue, software must guarantee that no two cores obtain the same subtask; if it is a hardware shared queue, hardware must guarantee that no two cores obtain the same subtask.
Optionally, in order to improve processing efficiency, different processing can also be performed according to the priority of tasks. For example, because the load required for antenna processing is larger, high-priority tasks such as antenna-level tasks can be handled using a centralized scheduling mode; and because the load required for user processing is smaller, low-priority tasks such as user-level tasks can be handled using a distributed scheduling mode. That is, before performing the operation of dividing the tasks in the resource pool into multiple subtasks according to the preset granularity:
The processor 601 can also be configured to determine whether the task in the current resource pool is an antenna-level task or a user-level task; if it is an antenna-level task, directly distribute the task in the current resource pool; if it is a user-level task, perform the operation of dividing the task in the current resource pool into multiple subtasks according to the preset granularity.
The operation of directly distributing the tasks in the resource pool may specifically be as follows:
Obtain the processing capability information of each core and the load information of the antennas, estimate the load of each task according to the processing capability information of each core and the load information of the antennas to obtain an estimated load, and distribute the tasks in the current resource pool (i.e. the antenna-level tasks) to the cores according to the obtained estimated load.
For the specific implementation of each of the above parts, reference can be made to the embodiments above, which are not repeated here.
As can be seen from the foregoing, the processor 601 in the communication device of the multi-core system of this embodiment can divide the tasks in the resource pool into multiple subtasks according to a preset granularity, then add these subtasks to a shared queue, and trigger multiple cores to successively obtain subtasks from the shared queue and process them. Because no load estimation is required in this scheme, and the multiple cores that process the tasks instead obtain subtasks from the shared queue according to their own load conditions, the problem of uneven task distribution among cores caused by inaccurate load estimation can be avoided, the tasks in the resource pool can be scheduled more evenly, and the load of each core can be balanced effectively in real time.

One of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware. The program can be stored in a computer-readable storage medium, and the storage medium can include: a read-only memory (ROM, Read Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disc, or the like.

The method, apparatus and system for scheduling a resource pool in a multi-core system provided by the embodiments of the present invention have been described in detail above. The description of the above embodiments is only intended to help understand the method of the present invention and its core concept; meanwhile, for those skilled in the art, changes will be made in the specific implementations and application scope according to the ideas of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (15)

    1. A method for scheduling a resource pool in a multi-core system, characterized by comprising:
    dividing tasks in the resource pool into multiple subtasks according to a preset granularity;
    adding the multiple subtasks to a shared queue;
    triggering multiple cores to successively obtain subtasks from the shared queue and process them, until the subtasks in the shared queue are all processed.
    2. The method according to claim 1, characterized in that the tasks in the resource pool comprise user-level tasks and antenna-level tasks, and the dividing of the tasks in the resource pool into multiple subtasks according to a preset granularity comprises:
    dividing the user-level tasks into multiple user-level subtasks according to the preset granularity, and dividing the antenna-level tasks into multiple antenna-level subtasks according to the preset granularity;
    wherein the adding of the multiple subtasks to a shared queue is specifically: adding the user-level subtasks to a user-level shared queue, and adding the antenna-level subtasks to an antenna-level shared queue.
    3. The method according to claim 2, characterized in that the triggering of multiple cores to successively obtain subtasks from the shared queue and process them, until the subtasks in the shared queue are all processed, comprises:
    triggering the multiple cores to successively obtain antenna-level subtasks from the antenna-level shared queue and process them until the antenna-level subtasks in the antenna-level shared queue are all processed, and then successively obtaining, by the multiple cores, user-level subtasks from the user-level shared queue and processing them, until the user-level subtasks in the user-level shared queue are all processed.
    4. The method according to claim 1, characterized in that the tasks in the resource pool comprise user-level tasks and antenna-level tasks, and before the dividing of the tasks in the resource pool into multiple subtasks according to a preset granularity, the method further comprises:
    determining whether the task in the current resource pool is an antenna-level task or a user-level task;
    if it is an antenna-level task, directly distributing the task in the current resource pool; if it is a user-level task, performing the step of dividing the task in the current resource pool into multiple subtasks according to the preset granularity.
    5. The method according to claim 4, characterized in that the directly distributing of the task in the current resource pool comprises:
    obtaining processing capability information of each core;
    estimating the load required for processing each antenna, to obtain an estimated load;
    distributing the task in the current resource pool to each core according to the processing capability information of each core and the estimated load.
    6. The method according to any one of claims 1 to 5, characterized in that
    the shared queue is specifically a software shared queue or a hardware shared queue.
    7. A scheduling apparatus for a resource pool in a multi-core system, characterized by comprising:
    a division unit, configured to divide tasks in the resource pool into multiple subtasks according to a preset granularity; an adding unit, configured to add the multiple subtasks obtained by the division unit to a shared queue; and a trigger unit, configured to trigger multiple cores to successively obtain subtasks from the shared queue and process them, until the subtasks in the shared queue are all processed.
    8. The scheduling apparatus for a resource pool in a multi-core system according to claim 7, characterized in that the tasks in the resource pool comprise user-level tasks and antenna-level tasks, and:
    the division unit is specifically configured to divide the user-level tasks into multiple user-level subtasks according to the preset granularity, and divide the antenna-level tasks into multiple antenna-level subtasks according to the preset granularity;
    the adding unit is specifically configured to add the user-level subtasks to a user-level shared queue, and add the antenna-level subtasks to an antenna-level shared queue.
    9. The scheduling apparatus for a resource pool in a multi-core system according to claim 8, characterized in that the trigger unit is specifically configured to trigger the multiple cores to successively obtain antenna-level subtasks from the antenna-level shared queue and process them until the antenna-level subtasks in the antenna-level shared queue are all processed, after which the multiple cores successively obtain user-level subtasks from the user-level shared queue and process them, until the user-level subtasks in the user-level shared queue are all processed.
    10. The scheduling apparatus for a resource pool in a multi-core system according to claim 7, characterized in that the tasks in the resource pool comprise user-level tasks and antenna-level tasks, and the scheduling apparatus further comprises a judging unit and an allocation unit;
    the judging unit is configured to determine whether the task in the current resource pool is an antenna-level task or a user-level task; the allocation unit is configured to directly distribute the task in the current resource pool when the judging unit determines that the task in the current resource pool is an antenna-level task;
    the division unit is specifically configured to divide the task in the current resource pool into multiple subtasks according to the preset granularity when the judging unit determines that the task in the current resource pool is a user-level task.
    11. The scheduling apparatus for a resource pool in a multi-core system according to claim 10, characterized in that the allocation unit comprises an obtaining subunit, an estimation subunit and a distribution subunit;
    the obtaining subunit is configured to obtain processing capability information of each core;
    the estimation subunit is configured to estimate the load required for processing each antenna, to obtain an estimated load; and the distribution subunit is configured to distribute the task in the current resource pool to each core according to the processing capability information of each core obtained by the obtaining subunit and the estimated load obtained by the estimation subunit.
    12. A communication system, characterized by comprising the scheduling apparatus for a resource pool in a multi-core system according to any one of claims 7 to 11.
    13. A communication device of a multi-core system, characterized by comprising a processor, a memory for storing data and programs, and a transceiver module for transmitting and receiving data;
    the processor is configured to divide tasks in a resource pool into multiple subtasks according to a preset granularity; add the multiple subtasks to a shared queue; and trigger multiple cores to successively obtain subtasks from the shared queue and process them, until the subtasks in the shared queue are processed.
    14. The communication device of the multi-core system according to claim 13, wherein the tasks in the resource pool include user-level tasks and antenna-level tasks, and:
    the processor is specifically configured to divide a user-level task into multiple user-level subtasks according to a preset granularity, and divide an antenna-level task into multiple antenna-level subtasks according to the preset granularity; add the user-level subtasks to a user-level shared queue, and the antenna-level subtasks to an antenna-level shared queue; and trigger the multiple cores to successively fetch and process antenna-level subtasks from the antenna-level shared queue until all antenna-level subtasks in the antenna-level shared queue have been processed, after which the multiple cores successively fetch and process user-level subtasks from the user-level shared queue until all user-level subtasks in the user-level shared queue have been processed.
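The ordering constraint in claim 14 — the antenna-level shared queue must drain completely before any core touches the user-level queue — amounts to a barrier between two scheduling phases. A simplified single-threaded simulation (round-robin core assignment, list-based tasks; all names here are illustrative assumptions):

```python
from collections import deque

def schedule_two_level(antenna_tasks, user_tasks, granularity, num_cores):
    """Simulate two-level scheduling: cores drain the antenna-level
    shared queue first, then move on to the user-level shared queue."""
    def split(task):
        return [task[i:i + granularity] for i in range(0, len(task), granularity)]

    antenna_q = deque(sub for t in antenna_tasks for sub in split(t))
    user_q = deque(sub for t in user_tasks for sub in split(t))

    schedule = []  # (level, core, subtask) in dispatch order
    for q, level in ((antenna_q, "antenna"), (user_q, "user")):
        turn = 0
        while q:  # the user-level queue is only touched once this one is empty
            schedule.append((level, turn % num_cores, q.popleft()))
            turn += 1
    return schedule
```

The sequential loop makes the barrier explicit; in a real multi-core implementation the same effect would need a synchronization point after the antenna phase.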
    15. The communication device of the multi-core system according to claim 13, wherein the processor is further configured to determine whether a task in the current resource pool is an antenna-level task or a user-level task; if it is an antenna-level task, directly distribute the task in the current resource pool; if it is a user-level task, perform the operation of dividing the task in the current resource pool into multiple subtasks according to a preset granularity.
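The type check described in claims 10 and 15 — antenna-level tasks distributed directly, user-level tasks split by the preset granularity first — reduces to a small branch. A hypothetical sketch; the dict task representation and the granularity value are assumptions, not from the patent:

```python
def dispatch(task, granularity=4):
    """Antenna-level tasks are distributed directly; user-level tasks are
    divided into subtasks of the preset granularity before queueing."""
    if task["level"] == "antenna":
        return [task["work"]]  # dispatched as-is, no splitting
    work = task["work"]
    return [work[i:i + granularity] for i in range(0, len(work), granularity)]
```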
CN201380003199.7A 2013-09-29 2013-09-29 Method, apparatus and system for scheduling resource pool in multi-core system Pending CN105051689A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2013/084575 WO2015042904A1 (en) 2013-09-29 2013-09-29 Method, apparatus and system for scheduling resource pool in multi-core system

Publications (1)

Publication Number Publication Date
CN105051689A true CN105051689A (en) 2015-11-11

Family

ID=52741842

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201380003199.7A Pending CN105051689A (en) 2013-09-29 2013-09-29 Method, apparatus and system for scheduling resource pool in multi-core system

Country Status (2)

Country Link
CN (1) CN105051689A (en)
WO (1) WO2015042904A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115981819A (en) * 2022-12-30 2023-04-18 摩尔线程智能科技(北京)有限责任公司 Core scheduling method and device for multi-core system

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN109214616B (en) 2017-06-29 2023-04-07 上海寒武纪信息科技有限公司 Information processing device, system and method
CN110413551B (en) 2018-04-28 2021-12-10 上海寒武纪信息科技有限公司 Information processing apparatus, method and device
WO2019001418A1 (en) 2017-06-26 2019-01-03 上海寒武纪信息科技有限公司 Data sharing system and data sharing method therefor
CN109426553A (en) * 2017-08-21 2019-03-05 上海寒武纪信息科技有限公司 Task cutting device and method, Task Processing Unit and method, multi-core processor
CN110502330A (en) * 2018-05-16 2019-11-26 上海寒武纪信息科技有限公司 Processor and processing method

Citations (3)

Publication number Priority date Publication date Assignee Title
CN101165655A (en) * 2006-10-20 2008-04-23 国际商业机器公司 Multiple processor computation system and its task distribution method
CN101169743A (en) * 2007-11-27 2008-04-30 南京大学 Method for implementing parallel power flow calculation based on multi-core computer in electric grid
CN101261591A (en) * 2008-04-28 2008-09-10 艾诺通信系统(苏州)有限责任公司 Multi- nuclear DSP system self-adapting task scheduling method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN101387952B (en) * 2008-09-24 2011-12-21 上海大学 Single-chip multi-processor task scheduling and managing method

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN115981819A (en) * 2022-12-30 2023-04-18 摩尔线程智能科技(北京)有限责任公司 Core scheduling method and device for multi-core system
CN115981819B (en) * 2022-12-30 2023-10-24 摩尔线程智能科技(北京)有限责任公司 Core scheduling method and device for multi-core system

Also Published As

Publication number Publication date
WO2015042904A1 (en) 2015-04-02

Similar Documents

Publication Publication Date Title
CN105051689A (en) Method, apparatus and system for scheduling resource pool in multi-core system
CN102822798B (en) Method and apparatus for the intrasystem resource capacity assessment of virtual container
CN104699542B (en) Task processing method and system
US11983564B2 (en) Scheduling of a plurality of graphic processing units
EP3253027A1 (en) Resource allocation method and apparatus for virtual machines
CN105979007A (en) Acceleration resource processing method and device and network function virtualization system
CN109564528A (en) The system and method for computational resource allocation in distributed computing
CN107562528A (en) Support the blocking on-demand computing method and relevant apparatus of a variety of Computational frames
CN103514046A (en) Virtual machine placement method and cluster management server
CN109634720A (en) A kind of multi-dummy machine shares the method, system and device of FPGA board
CN108427602B (en) Distributed computing task cooperative scheduling method and device
CN106325999A (en) Method and device for distributing resources of host machine
CN112114942A (en) Streaming data processing method based on many-core processor and computing device
Zhang et al. Data-aware task scheduling for all-to-all comparison problems in heterogeneous distributed systems
CN111813541B (en) Task scheduling method, device, medium and equipment
CN106325981A (en) Method and device for task scheduling
CN106062814A (en) Improved banked memory access efficiency by a graphics processor
CN107797870A (en) A kind of cloud computing data resource dispatching method
CN104765644A (en) Resource collaboration evolution system and method based on intelligent agent
CN115775199A (en) Data processing method and device, electronic equipment and computer readable storage medium
CN107239328A (en) Method for allocating tasks and device
CN106933646A (en) A kind of method and device for creating virtual machine
CN116010093A (en) Data processing method, apparatus, computer device and readable storage medium
CN105630593A (en) Method for handling interrupts
Youssfi et al. Efficient load balancing algorithm for distributed systems using mobile agents

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20151111
