CN103440173B - Scheduling method of a multi-core processor and related apparatus - Google Patents

Scheduling method of a multi-core processor and related apparatus

Info

Publication number: CN103440173B
Application number: CN201310373371.XA
Authority: CN (China)
Prior art keywords: processor, statistical analysis, snapshot, task, scheduled
Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other versions: CN103440173A (Chinese)
Inventor: 胡欣蔚
Current assignee: Huawei Technologies Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Publication of application: CN103440173A
Publication of granted patent: CN103440173B

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)
  • Debugging And Monitoring (AREA)
Abstract

The embodiment of the invention discloses a scheduling method and related apparatus for a multi-core processor, used for refining the tasks distributed to each processor before scheduling. The method comprises the steps of: generating a snapshot of the topology between the processors and the caches in the system; generating a scheduling policy according to the snapshot and control information input by a user; allocating processor resources to the task to be scheduled according to the snapshot, the scheduling policy, and the attributes of the task to be scheduled; and executing the task to be scheduled using the processor resources allocated to it.

Description

Scheduling method and related device of multi-core processor
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a scheduling method for a multi-core processor and a related apparatus.
Background
The multi-core processor is a main trend in the development of future processors: the number of cores integrated in a single physical processor has grown from an initial dual core to four cores, eight cores, or even more. With the continuous improvement of integration, common multi-core, homogeneous multi-core, and heterogeneous multi-core platforms with different processing capabilities place higher requirements on the scheduler, and the scheduling scenarios the scheduler faces tend to become more complicated.
When a Linux system is started, the BIOS (Basic Input Output System) provides static information related to the processors. Based on this static information, the Linux system divides the system structure into scheduling domains (Scheduling Domains) of 3 levels according to the relationships between HT (Hyper-Threading), core, SMP (Symmetric Multi-Processing), and NUMA (Non-Uniform Memory Access) nodes. Each scheduling domain is a set of processors (CPU, Central Processing Unit) having the same attributes. As shown in fig. 1, the 3 levels of scheduling domains are SMT (Simultaneous Multi-Threading) domains, SMP domains, and NUMA domains, and scheduling domains at different levels are connected together through multi-level chains, thereby forming a multi-level architecture. The existing architecture is mainly used for load balancing among multiple processors, either periodically or after each execution finishes, and is also used for performing system calls according to the CPU affinity issued from user space.
However, the load balancing performed among multiple processors by the conventional multi-core scheduling method is an after-the-fact adjustment, made only after the processors have already scheduled their tasks; it cannot solve the problem that task distribution among multiple processors before scheduling is unbalanced and coarse-grained. Moreover, CPU-affinity policies can only be issued manually from user space, and the scheduler can only schedule tasks according to the CPU affinity specified by user space; intelligent scheduling by the scheduler itself cannot be achieved.
Disclosure of Invention
The embodiment of the invention provides a scheduling method and a related device of a multi-core processor, which are used for solving the problems of unbalanced and imprecise task distribution among a plurality of processors before scheduling.
In order to solve the above technical problems, embodiments of the present invention provide the following technical solutions:
in a first aspect, an embodiment of the present invention provides a scheduling method for a multicore processor, including:
generating a snapshot of a topology between a processor and a cache in a system;
generating a scheduling strategy according to the snapshot and control information input by a user;
allocating processor resources for the task to be scheduled according to the snapshot, the scheduling strategy and the attribute of the task to be scheduled;
and executing the task to be scheduled by using the processor resource distributed to the task to be scheduled.
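The four claimed steps can be illustrated with a minimal Python sketch. All names here (`CpuInfo`, `generate_policy`, the `max_load` field, and so on) are hypothetical stand-ins for the claimed snapshot, policy, allocation, and execution stages; the patent does not specify any data format.

```python
from dataclasses import dataclass

@dataclass
class CpuInfo:
    cpu_id: int
    load: float        # current load, 0.0-1.0 (illustrative)
    frequency_mhz: int

@dataclass
class Snapshot:
    cpus: list         # CpuInfo records for each processor

def generate_snapshot(cpus):
    # Step 1: capture the processor/cache topology at this instant.
    return Snapshot(cpus=list(cpus))

def generate_policy(snapshot, control_info):
    # Step 2: combine the snapshot with user-supplied control information.
    return {"max_load": control_info.get("max_load", 0.8)}

def allocate(snapshot, policy, task_attrs):
    # Step 3: pick the least-loaded CPU that the policy allows.
    eligible = [c for c in snapshot.cpus if c.load < policy["max_load"]]
    return min(eligible, key=lambda c: c.load)

def execute(task, cpu):
    # Step 4: run the task on the chosen processor (stubbed here).
    return f"{task} on cpu{cpu.cpu_id}"

cpus = [CpuInfo(0, 0.9, 2400), CpuInfo(1, 0.2, 2400), CpuInfo(2, 0.5, 1800)]
snap = generate_snapshot(cpus)
policy = generate_policy(snap, {"max_load": 0.8})
chosen = allocate(snap, policy, {"priority": 1})
print(execute("taskA", chosen))   # taskA on cpu1
```

The least-loaded eligible processor (cpu1 in this toy data) is chosen before the task runs, rather than rebalancing afterwards.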
With reference to the first aspect, in a first possible implementation manner of the first aspect, the generating a snapshot of a topology between a processor and a cache in a system includes:
periodically generating snapshots of a topological structure between a processor and a cache in the system; or,
generating a snapshot for a topological structure between a processor and a cache in the snapshot range in a system according to a preset snapshot range; or,
and periodically generating snapshots for the topological structure between the processor and the cache in the system according to the preset snapshot range.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, if a snapshot is periodically generated on a topology between a processor and a cache in a system, the executing the task to be scheduled by using the processor resource allocated to the task to be scheduled further includes:
judging whether the load of the system exceeds a load threshold according to the periodically generated snapshots;
if the load of the system exceeds a load threshold, allocating processor resources for the task to be scheduled again according to the periodically generated snapshot, the scheduling strategy and the attribute of the task to be scheduled;
and continuing to execute the task to be scheduled by using the processor resource reallocated to the task to be scheduled.
With reference to the first possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, if a snapshot is periodically generated on a topology between a processor and a cache in a system, the executing the task to be scheduled by using the processor resource allocated to the task to be scheduled further includes:
judging whether the attribute of the task to be scheduled changes;
if the attributes of the tasks to be scheduled change, allocating processor resources for the tasks to be scheduled again according to the periodically generated snapshots, the scheduling strategy and the changed attributes of the tasks to be scheduled;
and continuing to execute the task to be scheduled by using the processor resource reallocated to the task to be scheduled.
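The second and third implementation manners above describe two reallocation triggers drawn from the periodic snapshots: the system load exceeding a threshold, and a change in the task's attributes. A hedged sketch (the predicate name and argument shapes are invented for illustration):

```python
def needs_reallocation(system_load, load_threshold, old_attrs, new_attrs):
    # Trigger 1: the periodic snapshot shows the system load exceeds the threshold.
    if system_load > load_threshold:
        return True
    # Trigger 2: the attributes of the task to be scheduled have changed.
    return old_attrs != new_attrs

assert needs_reallocation(0.95, 0.8, {"prio": 1}, {"prio": 1})   # load trigger
assert needs_reallocation(0.30, 0.8, {"prio": 1}, {"prio": 2})   # attribute trigger
assert not needs_reallocation(0.30, 0.8, {"prio": 1}, {"prio": 1})
```

When either trigger fires, processor resources are reallocated using the latest snapshot, the scheduling policy, and the (possibly changed) task attributes.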
With reference to the first aspect or the first, second, and third possible implementation manners of the first aspect, in a fourth possible implementation manner of the first aspect, after the generating a snapshot of a topology between a processor and a cache in a system, the method further includes:
performing statistical analysis on the attribute information of each processor according to the snapshot to generate a statistical analysis result;
the generating a scheduling policy according to the snapshot and the control information input by the user specifically includes:
and generating a scheduling strategy according to the snapshot, the statistical analysis result and the control information input by the user.
With reference to the fourth possible implementation manner of the first aspect, in a fifth possible implementation manner of the first aspect, the executing the task to be scheduled by using the processor resource allocated to the task to be scheduled further includes:
judging whether errors based on the statistical analysis result occur according to the statistical analysis result;
and if the error based on the statistical analysis result occurs, migrating the task to be scheduled to a processor specified by the scheduling policy according to the scheduling policy.
With reference to the fourth or fifth possible implementation manner of the first aspect, in a sixth possible implementation manner of the first aspect, the performing, according to the snapshot, a statistical analysis on the attribute information of each processor to generate a statistical analysis result includes at least one of the following six implementation manners:
performing statistical analysis according to the idle state attributes of the processors to generate a first statistical analysis result, where the first statistical analysis result includes: an idle state queue comprising an idle state ordering queue for each processor;
performing statistical analysis according to the frequency attributes of the processors to generate a second statistical analysis result, where the second statistical analysis result includes: a frequency queue comprising frequency high and low ordering queues of the processors;
performing statistical analysis according to the load attributes of the processors to generate a third statistical analysis result, where the third statistical analysis result includes: a load queue comprising a load high-low ordering queue of the respective processors;
performing statistical analysis according to the cache error attribute of each processor to generate a fourth statistical analysis result, where the fourth statistical analysis result includes: a cache error queue including a cache error number sorting queue of each processor;
performing statistical analysis according to the temperature attributes of the processors to generate a fifth statistical analysis result, where the fifth statistical analysis result includes: the temperature queues comprise temperature high-low ordering queues of the processors;
performing statistical analysis according to the queue attributes of the processors to generate a sixth statistical analysis result, where the sixth statistical analysis result includes: and the task queues comprise task quantity sequencing or task priority high-low sequencing queues in the queues of the processors.
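Each of the six statistical analysis results above is an ordering queue of processors keyed on one attribute. A minimal Python sketch (attribute names and sort directions are illustrative assumptions, not the patent's format):

```python
cpus = [
    {"id": 0, "idle": 0.1, "freq": 2400, "load": 0.9, "cache_err": 3, "temp": 70, "tasks": 12},
    {"id": 1, "idle": 0.6, "freq": 1800, "load": 0.2, "cache_err": 0, "temp": 55, "tasks": 3},
    {"id": 2, "idle": 0.3, "freq": 2000, "load": 0.5, "cache_err": 1, "temp": 62, "tasks": 7},
]

def ordering_queue(cpus, key, reverse=False):
    # Each statistical analysis result is a queue of CPU ids ordered by one attribute.
    return [c["id"] for c in sorted(cpus, key=lambda c: c[key], reverse=reverse)]

results = {
    "idle_state_queue":  ordering_queue(cpus, "idle", reverse=True),  # most idle first
    "frequency_queue":   ordering_queue(cpus, "freq", reverse=True),  # highest frequency first
    "load_queue":        ordering_queue(cpus, "load"),                # lowest load first
    "cache_error_queue": ordering_queue(cpus, "cache_err"),           # fewest cache errors first
    "temperature_queue": ordering_queue(cpus, "temp"),                # coolest first
    "task_queue":        ordering_queue(cpus, "tasks"),               # fewest queued tasks first
}
print(results["load_queue"])   # [1, 2, 0]
```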
With reference to the fourth, fifth, and sixth possible implementation manners of the first aspect, in a seventh possible implementation manner of the first aspect, the statistical analysis results are sorted in a red-black tree (RB tree) or a binary heap.
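A binary heap keeps the extreme element of such an ordering queue available in O(log n) per operation. Python's standard `heapq` module is used here only as a stand-in for whatever kernel-side RB-tree or heap structure an implementation would use:

```python
import heapq

# Pairs of (load, cpu_id); heapq keeps the minimum at the root,
# so the least-loaded processor can be popped in O(log n).
loads = [(0.9, 0), (0.2, 1), (0.5, 2)]
heapq.heapify(loads)
least_loaded = heapq.heappop(loads)
print(least_loaded)   # (0.2, 1)
```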
With reference to the first aspect or the first, second, third, fourth, fifth, sixth, and seventh possible implementation manners of the first aspect, in an eighth possible implementation manner of the first aspect, the executing the task to be scheduled by using the processor resource allocated to the task to be scheduled further includes:
judging whether a processor in the system has a fault in software or hardware;
and if a processor in the system has a fault in software or hardware, migrating the task to be scheduled to the processor specified by the scheduling policy according to the scheduling policy.
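The fault-driven migration step can be sketched as follows; the function name, the fault set, and the `fallback_cpu` policy field are illustrative assumptions:

```python
def migrate_on_fault(task, current_cpu, faulty_cpus, policy):
    # If the task's current processor has a software or hardware fault,
    # move the task to the processor the scheduling policy specifies.
    if current_cpu in faulty_cpus:
        return policy["fallback_cpu"]
    return current_cpu

policy = {"fallback_cpu": 3}
assert migrate_on_fault("taskA", 1, {1, 2}, policy) == 3   # migrated
assert migrate_on_fault("taskA", 0, {1, 2}, policy) == 0   # stays put
```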
With reference to the first aspect or the first, second, third, fourth, fifth, sixth, seventh, and eighth possible implementation manners of the first aspect, in a ninth possible implementation manner of the first aspect, the snapshot includes at least one of the following information: the number of processors in the system, attribute information of each processor, attribute information of the caches packaged within each processor, the number of cores included in each processor, whether each core includes Simultaneous Multithreading (SMT), and the number of SMT threads.
With reference to the first aspect or the first, second, third, fourth, fifth, sixth, seventh, eighth, and ninth possible implementations of the first aspect, in a tenth possible implementation of the first aspect, the snapshot further includes attribute information of a shared cache encapsulated in each processor.
In a second aspect, an embodiment of the present invention provides a scheduling apparatus for a multicore processor, including:
the snapshot generating module is used for generating a snapshot for a topological structure between a processor and a cache in the system;
the acquisition module is used for generating a scheduling strategy according to the snapshot and control information input by a user;
the resource allocation module is used for allocating processor resources for the tasks to be scheduled according to the snapshots, the scheduling strategies and the attributes of the tasks to be scheduled;
and the scheduling module is used for executing the task to be scheduled by using the processor resource distributed to the task to be scheduled.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the snapshot generating module is specifically configured to generate snapshots for a topology between a processor and a cache in a system according to a periodicity; or, according to a preset snapshot range, generating a snapshot for the topological structure between the processor and the cache in the snapshot range in the system; or, periodically generating snapshots for the topology between the processor and the cache in the system according to a preset snapshot range.
With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner of the second aspect, the apparatus further includes: a determination module, wherein,
the judging module is used for judging whether the load of the system exceeds a load threshold according to the periodically generated snapshot if the periodically generated snapshot is generated on the topological structure between the processor and the cache in the system;
the resource allocation module is further configured to, when the load of the system exceeds a load threshold, reallocate processor resources to the task to be scheduled according to the periodically generated snapshot, the scheduling policy, and the attribute of the task to be scheduled;
the scheduling module is further configured to continue executing the task to be scheduled by using the processor resource reallocated to the task to be scheduled.
With reference to the first possible implementation manner of the second aspect, in a third possible implementation manner of the second aspect, the apparatus further includes: a determination module, wherein,
the judging module is used for judging whether the attribute of the task to be scheduled changes if a snapshot is generated periodically on a topological structure between a processor and a cache in a system;
the resource allocation module is further configured to, when the attribute of the task to be scheduled changes, reallocate processor resources to the task to be scheduled according to the periodically generated snapshot, the scheduling policy, and the changed attribute of the task to be scheduled;
the scheduling module is further configured to continue executing the task to be scheduled by using the processor resource reallocated to the task to be scheduled.
With reference to the second aspect or the first, second, and third possible implementation manners of the second aspect, in a fourth possible implementation manner of the second aspect, the apparatus further includes: a statistical analysis module, wherein,
the statistical analysis module is used for performing statistical analysis on the attribute information of each processor according to the snapshot to generate a statistical analysis result;
the obtaining module is specifically configured to generate a scheduling policy according to the snapshot, the statistical analysis result, and control information input by a user.
With reference to the fourth possible implementation manner of the second aspect, in a fifth possible implementation manner of the second aspect, the apparatus further includes: a judging module and a scheduling migration module, wherein,
the judging module is used for judging whether errors based on the statistical analysis result occur according to the statistical analysis result;
and the scheduling migration module is further used for migrating the task to be scheduled to a processor specified by the scheduling policy according to the scheduling policy when an error based on a statistical analysis result occurs.
With reference to the fourth or fifth possible implementation manner of the second aspect, in a sixth possible implementation manner of the second aspect, the statistical analysis module includes at least one of the following six sub-modules:
a first statistical analysis sub-module, configured to perform statistical analysis according to the idle state attributes of the processors to generate a first statistical analysis result, where the first statistical analysis result includes: an idle state queue comprising an idle state ordering queue for each processor;
a second statistical analysis submodule, configured to perform statistical analysis according to the frequency attribute of each processor, and generate a second statistical analysis result, where the second statistical analysis result includes: a frequency queue comprising frequency high and low ordering queues of the processors;
a third statistical analysis submodule, configured to perform statistical analysis according to the load attribute of each processor, and generate a third statistical analysis result, where the third statistical analysis result includes: a load queue comprising a load high-low ordering queue of the respective processors;
a fourth statistical analysis submodule, configured to perform statistical analysis according to the cache error attribute of each processor, and generate a fourth statistical analysis result, where the fourth statistical analysis result includes: a cache error queue including a cache error number sorting queue of each processor;
a fifth statistical analysis submodule, configured to perform statistical analysis according to the temperature attribute of each processor, and generate a fifth statistical analysis result, where the fifth statistical analysis result includes: the temperature queues comprise temperature high-low ordering queues of the processors;
a sixth statistical analysis submodule, configured to perform statistical analysis according to the queue attributes of the processors, and generate a sixth statistical analysis result, where the sixth statistical analysis result includes: and the task queues comprise task quantity sequencing or task priority high-low sequencing queues in the queues of the processors.
With reference to the fourth, fifth, and sixth possible implementation manners of the second aspect, in a seventh possible implementation manner of the second aspect, the statistical analysis module is specifically configured to sort the statistical analysis results in a red-black tree RBtree or a binary heap.
With reference to the second aspect or the first, second, third, fourth, fifth, sixth, and seventh possible implementation manners of the second aspect, in an eighth possible implementation manner of the second aspect, the apparatus further includes: a judging module and a scheduling migration module, wherein,
the judging module is used for judging whether a processor in the system has a fault in software or hardware;
and the scheduling migration module is used for migrating the task to be scheduled to the processor specified by the scheduling policy according to the scheduling policy when a processor in the system has a fault in software or hardware.
With reference to the second aspect or the first, second, third, fourth, fifth, sixth, seventh, and eighth possible implementation manners of the second aspect, in a ninth possible implementation manner of the second aspect, the snapshot generated by the snapshot generating module includes at least one of the following information: the number of processors in the system, attribute information of each processor, attribute information of the caches packaged within each processor, the number of cores included in each processor, whether each core includes Simultaneous Multithreading (SMT), and the number of SMT threads.
With reference to the second aspect or the first, second, third, fourth, fifth, sixth, seventh, eighth, and ninth possible implementation manners of the second aspect, in a tenth possible implementation manner of the second aspect, the snapshot generating module is further configured to generate a snapshot that includes attribute information of a shared cache encapsulated in each processor.
According to the technical scheme, the embodiment of the invention has the following advantages:
in the embodiment of the invention, a snapshot is generated for a topological structure between a processor and a cache in a system, then a scheduling strategy is generated according to the snapshot and control information input by a user, processor resources are allocated for a task to be scheduled according to the snapshot, the scheduling strategy and attributes of the task to be scheduled, and finally the scheduling task is executed according to the processor resources allocated to the task to be scheduled. Because the snapshot is generated aiming at the topological structure between the processor and the cache in the system before the processor resources are allocated to the task to be scheduled, the processor resources are allocated based on the snapshot, the scheduling strategy and the attributes of the task to be scheduled, and the snapshot can describe the real condition of the system more accurately and more finely.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be derived by those skilled in the art from these drawings.
FIG. 1 is a schematic diagram illustrating an implementation of a Linux scheduling domain for load balancing provided in the prior art;
fig. 2 is a schematic flowchart of a scheduling method for a multicore processor according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a topology of a multicore processor in the system according to the embodiment of the present invention;
FIG. 4 is a diagram illustrating an implementation of generating a snapshot according to a topology of a processor and a cache and performing statistical analysis according to the snapshot according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a specific implementation process of a scheduling apparatus of a multi-core processor according to an embodiment of the present invention;
fig. 6 is a schematic diagram illustrating a scheduling and control implementation applied to a wireless controller according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an implementation process of a task distribution mechanism based on snapshot and statistical analysis results according to an embodiment of the present invention;
fig. 8-a is a schematic structural diagram of a scheduling apparatus of a multicore processor according to an embodiment of the present invention;
fig. 8-b is a schematic diagram of a component structure of a scheduling apparatus of another multi-core processor according to an embodiment of the present invention;
fig. 8-c is a schematic structural diagram of a scheduling apparatus of another multi-core processor according to an embodiment of the present invention;
FIG. 8-d is a schematic diagram of a structure of a statistical analysis module according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of another scheduling apparatus for a multicore processor according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a scheduling method and a related device of a multi-core processor, which are used for refining tasks distributed to various processors before scheduling.
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments that can be derived by one skilled in the art from the embodiments given herein are intended to be within the scope of the invention.
One embodiment of the scheduling method of the multi-core processor of the present invention may include: generating a snapshot of a topology between a processor and a cache in a system; generating a scheduling strategy according to the snapshot and control information input by a user; allocating processor resources for the task to be scheduled according to the snapshot, the scheduling strategy and the attribute of the task to be scheduled; and executing the task to be scheduled by using the processor resource distributed to the task to be scheduled.
Referring to fig. 2, a method for scheduling a multi-core processor according to an embodiment of the present invention may include:
201. a snapshot is generated of the topology between the processors and the cache in the system.
In the embodiment of the invention, as for the topological structure between the processor and the cache in the system, firstly, the snapshot is made, and the topological structure information between the processor and the cache in the system can be recorded through the snapshot, so that a reference basis is provided for subsequent scheduling.
Specifically, in some embodiments of the present invention, the snapshot may include the number of processors in the system, attribute information of each processor, attribute information of a cache encapsulated in each processor, the number of cores included in each processor, whether each core includes Simultaneous Multithreading (SMT), and the number of the SMT.
In some embodiments of the invention, to enable use of a shared cache within a processor, the snapshot generated may further comprise: attribute information of the shared cache encapsulated within each processor.
In this embodiment of the present invention, the system refers to a system containing processors with multiple cores. The system includes multiple Non-Uniform Memory Access (NUMA) nodes, each NUMA node includes multiple Symmetric Multi-Processing (SMP) units, each SMP includes multiple sockets, and each socket includes multiple processors; each processor may be a homogeneous multi-core or a heterogeneous multi-core. As shown in fig. 3, CPU1 is an x86 processor containing four cores, CORE #0, CORE #1, CORE #2, and CORE #3. Each core contains two SMT threads, SMT #0 and SMT #1, and each core has two levels of cache: an L1 cache (primary cache) and an L2 cache (secondary cache). An L3 cache, called the shared cache, is also packaged in CPU1. CPU2 is an ARM (Advanced RISC Machines) processor, also containing four cores, CORE #0, CORE #1, CORE #2, and CORE #3; each core has one cache, the L1 cache (primary cache), and an L2 cache (shared cache) is also packaged in CPU2. As fig. 3 shows, the number of cores per processor, the SMT threads included in each core, the caches corresponding to each core, and so on differ across the system. In the embodiment of the present invention, the actual condition of the system can be captured accurately and finely by generating a snapshot of the topology between the processors and the caches, providing a true and reliable basis for making a scheduling policy and allocating processor resources, so that tasks are distributed among the processors more evenly. This is completely different from the after-the-fact "remedial policy" of load balancing in the prior art: task allocation among the processors can be made more even and more fine-grained before scheduling.
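The heterogeneous topology of fig. 3 can be pictured as nested records. The dict layout and field names below are illustrative only; the patent does not prescribe a data format for the snapshot:

```python
# Nested dicts standing in for a snapshot of Fig. 3's topology:
# CPU1 is x86, four cores, two SMT threads per core, L1/L2 per core, shared L3;
# CPU2 is ARM, four cores, L1 per core, shared L2.
snapshot = {
    "CPU1": {
        "arch": "x86",
        "cores": {f"CORE#{i}": {"smt": ["SMT#0", "SMT#1"],
                                "caches": ["L1", "L2"]} for i in range(4)},
        "shared_cache": "L3",
    },
    "CPU2": {
        "arch": "ARM",
        "cores": {f"CORE#{i}": {"smt": [], "caches": ["L1"]} for i in range(4)},
        "shared_cache": "L2",
    },
}

num_processors = len(snapshot)
smt_per_core_cpu1 = len(snapshot["CPU1"]["cores"]["CORE#0"]["smt"])
print(num_processors, smt_per_core_cpu1)   # 2 2
```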
Because of the complexity of the system's composition, which the prior art does not take into account when allocating processor resources, the allocation of processor resources to tasks in the prior art is not fine-grained enough.
In some embodiments of the invention, snapshots may be generated periodically for the topology between the processors and the caches in the system; or a snapshot may be generated for the topology between the processors and the caches within a preset snapshot range; or snapshots may be generated periodically within the preset snapshot range. That is, generating a snapshot of the topology between the processors and the caches can be implemented in three specific ways: the first according to a preset time interval, the second according to a preset snapshot range, and the third according to both a preset snapshot range and a preset time interval. For example, the time interval may be set to 1 ms, so that a snapshot of the topology between the processors and the caches is generated every 1 ms. If, after the snapshot at the current time, a CPU is isolated, a cache is isolated, or a hardware fault occurs in a CPU, then comparing the snapshot generated at the next time point with the previous one reveals that the topology has changed; in this way the snapshots accurately and finely reflect the real condition of the system.
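Detecting a change between two consecutive periodic snapshots reduces to a comparison. A trivial sketch (the snapshot representation is an assumption carried over from the illustration above):

```python
def topology_changed(prev_snapshot, curr_snapshot):
    # Comparing consecutive periodic snapshots reveals isolated CPUs,
    # isolated caches, or hardware faults that altered the topology.
    return prev_snapshot != curr_snapshot

prev = {"CPU1": {"cores": 4}, "CPU2": {"cores": 4}}
curr = {"CPU1": {"cores": 4}}          # CPU2 was isolated after the last snapshot
assert topology_changed(prev, curr)
assert not topology_changed(prev, dict(prev))
```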
The snapshot range indicates which devices in the system snapshots are generated for. For example, if the preset snapshot range covers only CORE #2 in CPU1, then snapshots are generated only for the topology between the processors and the caches in CPU1, which allows stronger targeting. Snapshots can also be generated, within a preset range, for one or a series of scheduling-relevant metrics that a specific application cares about. For example, if the application cares about processor load, then an appropriate processor can be selected according to each processor's load when generating the scheduling policy and allocating processor resources, ensuring load balancing among the processors. It should be noted that the time interval for generating snapshots and the snapshot range may be input by the user through a man-machine interface and stored as control information in a user-state library, to be called when snapshots are generated.
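Restricting a snapshot to a preset range amounts to filtering the topology down to the named devices. The function and the `(cpu, core)` range encoding are hypothetical illustrations of the CORE #2 example above:

```python
def snapshot_in_range(topology, snapshot_range):
    # Capture only the devices named in the preset snapshot range,
    # e.g. only CORE#2 of CPU1.
    return {cpu: {core: info for core, info in cores.items()
                  if (cpu, core) in snapshot_range}
            for cpu, cores in topology.items()
            if any(c == cpu for c, _ in snapshot_range)}

topology = {"CPU1": {"CORE#1": {"load": 0.4}, "CORE#2": {"load": 0.7}},
            "CPU2": {"CORE#0": {"load": 0.1}}}
snap = snapshot_in_range(topology, {("CPU1", "CORE#2")})
print(snap)   # {'CPU1': {'CORE#2': {'load': 0.7}}}
```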
In other embodiments of the present invention, after a snapshot is generated for the topology between the processors and the caches in the system, the attribute information of each processor may be statistically analyzed according to the snapshot to generate a statistical analysis result. That is, after the topology between the processors and the caches in the system is collected, the attribute information of each processor can be statistically analyzed according to the content of the snapshot; the statistical analysis result can then serve as an additional basis for generating the scheduling policy, making the generated scheduling policy more accurate.
It should be noted that the statistical analysis results may be sorted in the form of a red-black tree (RB tree) or a binary heap. The generation of the statistical analysis result is described in detail below; it may be any one of the following examples, or a combination of two or more of them, and is not limited herein.
The attribute information of a processor may specifically be its idle state (Idle state) attribute; statistical analysis is performed according to the idle state attribute of each processor to generate a first statistical analysis result, which includes an idle state queue ordering the processors by idle state;
the attribute information may specifically be the frequency (Frequency) attribute; statistical analysis is performed according to the frequency attribute of each processor to generate a second statistical analysis result, which includes a frequency queue ordering the processors from high to low frequency;
the attribute information may specifically be the load (load) attribute; statistical analysis is performed according to the load attribute of each processor to generate a third statistical analysis result, which includes a load queue ordering the processors from high to low load;
the attribute information may specifically be the cache error (Cache error) attribute; statistical analysis is performed according to the cache error attribute of each processor to generate a fourth statistical analysis result, which includes a cache error queue ordering the processors by number of cache errors;
the attribute information may specifically be the temperature (temperature) attribute; statistical analysis is performed according to the temperature attribute of each processor to generate a fifth statistical analysis result, which includes a temperature queue ordering the processors from high to low temperature;
the attribute information may specifically be the queue (Queue) attribute; statistical analysis is performed according to the queue attribute of each processor to generate a sixth statistical analysis result, which includes a task queue ordering the processors by the number of tasks in each processor's queue or by task priority.
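As an illustrative sketch only (the attribute values and CPU names below are invented, not part of the patent), any of the six ordered attribute queues above can be built from snapshot data with a binary heap; an RB tree would serve the same fast-indexing purpose:

```python
import heapq

# Hypothetical per-CPU attribute values read from a snapshot.
cpus = {
    "CPU0": {"load": 0.42, "temperature": 61, "cache_errors": 3},
    "CPU1": {"load": 0.10, "temperature": 55, "cache_errors": 0},
    "CPU2": {"load": 0.77, "temperature": 70, "cache_errors": 1},
}

def attribute_queue(cpus, attr):
    # Build an ordered queue for one attribute via a binary heap.
    heap = [(attrs[attr], name) for name, attrs in cpus.items()]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(cpus))]

load_queue = attribute_queue(cpus, "load")  # ordered low to high
print(load_queue)  # ['CPU1', 'CPU0', 'CPU2']
```

The same helper run on `"temperature"` or `"cache_errors"` yields the fifth and fourth statistical analysis results, and several queues can be consulted together to form a combined policy.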
To illustrate the snapshot and the statistical analysis results described in the embodiments of the present invention in detail, take the example of generating a snapshot for the topology between the CPUs and the caches in an SMP system on a NUMA node, as shown in fig. 4. The content of the snapshot includes: the SMP contains socket1 and socket2; socket1 contains CPU1 and socket2 contains CPU2, both belonging to the same SMP; CPU1 contains two cores, core0 and core1, and CPU2 contains two cores, core2 and core3; each core has its own caches L1 and L2, the level-one cache L1 being divided into L1D for storing data and L1I for storing instructions; the cores additionally share a cache L3; and each core contains two SMTs, SMT0 and SMT1.
Fig. 4 illustrates an SMP containing two CPUs, but in a real system the SMP may contain more; this is only an example. After the snapshot is generated, the attribute information of each processor may also be statistically analyzed according to the snapshot to generate statistical analysis results. As shown in fig. 4, taking an SMP containing seven CPUs (CPU0 through CPU6) as an example, an idle state queue, a frequency queue, a load queue, a cache error queue, a temperature queue, and a task queue are generated. Note that each queue orders the CPUs by the given attribute from left to right: in the temperature queue, for example, the hottest CPU is CPU5, the next CPU6, and the coolest CPU3; the ordering of the other queues is similar and is not repeated. The statistical analysis results can be sorted with an RB tree or a binary heap; the upper right of fig. 4 shows such a sorting, taking the load queue as an example, so that the load of each CPU can be looked up conveniently.
202. Generate a scheduling policy according to the snapshot and the control information input by the user.
In the embodiment of the invention, after the snapshot is generated, the scheduling policy can be generated according to the snapshot and the control information input by the user. The snapshot describes the real condition of the system accurately and finely, so a policy generated by combining the snapshot with the user's control information better matches the real state of the system and enables intelligent scheduling. The control information input by the user may take multiple forms. For example, the user may set a whitelist, writing specific task IDs (Identity) and designated CPUs into it as control information, so that only the specific tasks appearing in the whitelist may be allocated the designated CPU resources. As another example, the control information may bind a specific task to specific processor resources, so that only the bound processor resources may be allocated to a task satisfying the binding relationship. The control information input by the user may also be a specific security policy, and the scheduling policy is generated according to that control information together with the generated snapshot. In the embodiment of the present invention, the scheduling policy is generated flexibly according to the specific control information input by the user and the specific content of the generated snapshot, and may be generated in various ways; for example, it may be applied to a scheduling isolation mechanism and a resource occupation mechanism based on a specific security policy, which provides particularly good support for user-mode scheduling.
As another example, a fault-tolerant scheduling policy may be applied to a scheduling migration mechanism that dynamically monitors and isolates a faulty entity and completes the migration, while providing the scheduler with the processor resources required for scheduling optimization, thereby avoiding the failure of critical tasks while balancing efficiency and reliability.
In some embodiments of the present invention, if the attribute information of each processor is statistically analyzed according to the snapshot, the resulting statistical analysis result may also serve as one of the bases for generating the scheduling policy; that is, the scheduling policy may be generated according to the snapshot, the statistical analysis result, and the control information input by the user. For example, if the statistical analysis result is the temperature queue, the generated scheduling policy may be: preferentially select the CPU with the lowest temperature to execute the task to be scheduled each time.
In other embodiments of the present invention, the scheduling policy may also specify to which processors the task to be scheduled is migrated when an error based on the statistical analysis result occurs or when system hardware fails, so as to avoid a task scheduling failure. An error based on the statistical analysis result is one that can be detected from the generated statistical analysis result; for example, when the cache error count of a processor exceeds a preset threshold, or when the temperature of a processor exceeds a preset temperature value, such an error is determined to have occurred.
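As a sketch only (the data structures and names are illustrative assumptions, not the patent's implementation), the whitelist and binding forms of control information described above might be combined with a snapshot into a policy mapping each task to its candidate CPUs:

```python
# Hypothetical control information a user might input.
control_info = {
    "whitelist_tasks": {101, 102},   # task IDs allowed on whitelisted CPUs
    "whitelist_cpus": {"CPU1"},
    "bindings": {103: "CPU2"},       # task 103 bound to CPU2
}

def build_policy(control_info, snapshot):
    # One possible policy representation: a map from task ID to the set
    # of CPUs the task may be scheduled on, filtered by snapshot state.
    online = {c for c, a in snapshot.items() if a.get("state") == "online"}
    policy = {}
    for task in control_info["whitelist_tasks"]:
        policy[task] = control_info["whitelist_cpus"] & online
    for task, cpu in control_info["bindings"].items():
        policy[task] = {cpu} & online
    return policy

snapshot = {"CPU1": {"state": "online"}, "CPU2": {"state": "isolated"}}
policy = build_policy(control_info, snapshot)
print(policy[101])  # {'CPU1'}
print(policy[103])  # set(): the bound CPU is isolated in the snapshot
```

An empty candidate set, as for task 103 here, is exactly the situation in which a fallback to another normally working processor (step 203 below) prevents a scheduling failure.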
It should be noted that some systems have a shared cache and others do not. If a shared cache is encapsulated in a processor in the system, such as L2 and L3 in fig. 3, the snapshot generated in step 201 may further include the attribute information of the shared cache, so that the shared cache can be taken into account when the scheduling policy is generated; the generated policy may, for instance, allocate the shared cache to the task to be scheduled.
203. Allocate processor resources to the task to be scheduled according to the snapshot, the scheduling policy, and the attributes of the task to be scheduled.
In the embodiment of the present invention, after the snapshot and the scheduling policy are generated, the attributes of the task to be scheduled must also be considered; based on all of this information, processor resources can be allocated to the task. For example, if the attributes of the task satisfy a whitelist set in the scheduling policy, the snapshot is consulted to find a normally working processor, and the processor resources specified in the scheduling policy are preferentially allocated to the task. If the snapshot reveals that the processor resources specified in the scheduling policy have been isolated or have failed, then, to avoid a scheduling failure, a normally working processor is selected according to the snapshot and allocated to the task, so that the scheduling of critical tasks does not fail.
In some embodiments of the present invention, when allocating processor resources to a task to be scheduled, the load of the system may be combined with the generated snapshot, the scheduling policy, and the attributes of the task as the basis for allocation, so as to keep the system from being overloaded as far as possible.
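The preferential-allocation-with-fallback behaviour of step 203 can be sketched as follows (an illustrative outline with assumed data shapes, not the patent's implementation), using the system load as the tie-breaker mentioned above:

```python
def allocate(task_id, policy, snapshot, load):
    # Prefer the processors the policy specifies for this task; if the
    # snapshot shows them isolated or faulty, fall back to the least-loaded
    # working CPU so that a critical task does not fail.
    working = [c for c, a in snapshot.items() if a.get("state") == "online"]
    preferred = [c for c in policy.get(task_id, []) if c in working]
    candidates = preferred or working
    if not candidates:
        return None  # no usable processor at all
    return min(candidates, key=lambda c: load[c])

snapshot = {"CPU0": {"state": "online"}, "CPU1": {"state": "isolated"},
            "CPU2": {"state": "online"}}
load = {"CPU0": 0.6, "CPU1": 0.1, "CPU2": 0.2}
# The policy names CPU1, but the snapshot shows it isolated -> fall back.
print(allocate(42, {42: ["CPU1"]}, snapshot, load))  # CPU2
```

Choosing the least-loaded candidate is one simple way to honour the "do not overload the system" consideration while still preferring the policy-specified resources whenever they are usable.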
204. Execute the task to be scheduled using the processor resources allocated to it.
After the processor resources are allocated to the task to be scheduled, the processor resources allocated to the task to be scheduled can be used for executing the task to be scheduled.
In some embodiments of the present invention, while the task to be scheduled is being executed with the allocated processor resources, the method may further include: if snapshots are generated periodically for the topology between the processors and the caches in the system, judging according to the periodically generated snapshots whether the load of the system exceeds a load threshold; if it does, re-allocating processor resources to the task according to the periodically generated snapshots, the scheduling policy, and the attributes of the task; and continuing to execute the task with the re-allocated processor resources. That is, while the task is being executed, snapshots of the system are continuously generated at a time interval, for example 1 ms; the latest snapshot is used to monitor in real time whether the system load is too heavy, and if it is, processor resources can be re-allocated to the task and execution continued on them.
In other embodiments of the present invention, while the task to be scheduled is being executed with the allocated processor resources, the method may further include: if snapshots are generated periodically for the topology between the processors and the caches in the system, judging whether the attributes of the task to be scheduled have changed; if they have, re-allocating processor resources to the task according to the periodically generated snapshots, the scheduling policy, and the changed attributes; and continuing to execute the task with the re-allocated processor resources. That is, while the task is being executed, snapshots of the system are continuously generated at a time interval, for example 1 ms, and when the task's attributes change, processor resources can be re-allocated according to the latest snapshot, the scheduling policy, and the changed attributes, and execution continued on them.
In other embodiments of the present invention, while the task to be scheduled is being executed with the allocated processor resources, the method may further include: judging, according to the statistical analysis result, whether an error based on that result has occurred; and if so, migrating the task to the processor specified by the scheduling policy. That is, while the task is being executed, snapshots of the system are continuously generated at a time interval, for example 1 ms; by comparing the latest snapshots, it is judged from the statistical analysis result whether such an error has occurred, and if it has, the task can be migrated to the processor specified by the scheduling policy, so that the task does not fail.
In other embodiments of the present invention, while the task to be scheduled is being executed with the allocated processor resources, the method may further include: judging whether a processor in the system has a software or hardware fault; and if so, migrating the task to the processor specified by the scheduling policy. That is, while the task is being executed, snapshots of the system are continuously generated at a time interval, for example 1 ms; by comparing the latest snapshots and the statistical analysis result, it is judged whether a processor in the system has a hardware or software fault, and if it has, the task can be migrated to the processor specified by the scheduling policy, so that the task does not fail.
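The four run-time checks just described (overload, attribute change aside, statistics-based errors, and faults) can be summarized in one monitoring pass; this is an illustrative sketch with invented thresholds and attribute names, not the patent's implementation:

```python
LOAD_THRESHOLD = 0.8
CACHE_ERROR_THRESHOLD = 5
TEMP_THRESHOLD = 85

def monitor_step(snapshot, cpu):
    # One pass over the checks: faults and statistics-based errors lead to
    # migration per the scheduling policy; overload leads to re-allocation.
    attrs = snapshot[cpu]
    if attrs.get("hw_fault") or attrs.get("sw_fault"):
        return "migrate"
    if attrs.get("cache_errors", 0) > CACHE_ERROR_THRESHOLD:
        return "migrate"          # error based on the statistical analysis
    if attrs.get("temperature", 0) > TEMP_THRESHOLD:
        return "migrate"          # error based on the statistical analysis
    if attrs.get("load", 0) > LOAD_THRESHOLD:
        return "reallocate"
    return "continue"

snap = {"CPU3": {"load": 0.95, "cache_errors": 1}}
print(monitor_step(snap, "CPU3"))  # reallocate (overloaded)
snap["CPU3"]["cache_errors"] = 9
print(monitor_step(snap, "CPU3"))  # migrate (statistics-based error)
```

In the embodiments above, this check would run against each fresh snapshot, e.g. every 1 ms.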
It can be seen from the above embodiments that a snapshot is first generated for the topology between the processors and the caches in the system, a scheduling policy is then generated according to the snapshot and the control information input by the user, processor resources are allocated to the task to be scheduled according to the snapshot, the scheduling policy, and the attributes of the task, and finally the task is executed on the allocated processor resources. Because the snapshot is generated for the topology between the processors and the caches before processor resources are allocated, and the allocation is based on the snapshot, the scheduling policy, and the attributes of the task, the snapshot can describe the real condition of the system accurately and finely, so the tasks distributed to each processor are refined before scheduling.
To facilitate a better understanding and implementation of the above aspects of the embodiments of the present invention, several application scenarios are described below. Fig. 5 shows a specific implementation of the scheduling apparatus of a multi-core processor in practical application, which may include: a snapshot generating and analyzing module, a user management module, a processor resource allocation and control module, a scheduling control and migration module, a user-mode scheduler, a task distribution module, and a kernel-mode scheduler. Wherein,
the snapshot generating and analyzing module generates a snapshot for the topology between the processors and the caches in the system, and analyzes the attribute information of the processors based on historical statistics to generate statistical analysis results. Generating snapshots of the system can reflect the topology between the processors and the caches at a controllable time granularity and/or depth granularity, can support the different capability sets of multiple processors, and enables topology awareness. The snapshot generating and analyzing module may be configured flexibly according to the application scenario and user requirements: fig. 5 shows it disposed in user space (User space), and it may equally be disposed in the operating system kernel (Kernel), as shown by the dotted line in fig. 5. When disposed in user space it can realize the related functions entirely in user mode; when disposed in the operating system kernel it can realize intelligent scheduling by the kernel-mode scheduler.
The snapshot generating and analyzing module mainly comprises the following two functions:
1) Generating a snapshot object: the snapshot reflects the topology between the processors and the caches and collects real-time data; it contains the processor and cache topology organized as a tree structure together with the relevant attributes (data), and snapshot objects can be generated accurately for both homogeneous and heterogeneous platforms. Still taking fig. 4 as an example, each snapshot creation point creates a tree structure representing the affinity between CPUs and caches, whose leaf nodes hold the processors available to the system, the associated attribute data, and pointers to the processors' run queues.
2) Generating statistical analysis results: taking fig. 4 as an example, after each snapshot is generated, the current and past snapshots are compared and analyzed to generate statistical analysis results based on different policies and attributes. The results may be sorted or processed with different data structures (an RB tree or a binary heap) and stored in the corresponding attribute queues for fast indexing, and different attributes may be superimposed to form a combined policy.
The user management module acquires the control information input by the user through the man-machine interface, and indicates to the snapshot generating and analyzing module the time granularity of snapshot generation, the processor attribute information a specific program is interested in, and the range granularity of the snapshot. Control-information and scheduling-policy issuing, task distribution control, and statistics and state monitoring are realized through the control/state interface and the statistics/query interface.
The processor resource allocation and control module is used for generating a scheduling strategy according to the snapshot and control information input by a user; and allocating processor resources for the tasks to be scheduled according to the snapshot, the scheduling strategy and the attributes of the tasks to be scheduled.
The scheduling control and migration module automatically detects the scheduling state and migrates the task to be scheduled upon an error based on the statistical analysis result or a hardware error. It may adopt an active scheduling migration mechanism (dynamic resource allocation) based on the normal scheduling policy, or a scheduling migration mechanism based on a disaster-tolerant scheduling policy, so that a critical task is not left failed for a long time. As shown in fig. 5, the MCA (Machine Check Architecture) detects a core fault event or an event in which the processor temperature exceeds a threshold. Combining snapshots and statistical analysis, the module can provide a scheduling isolation mechanism and a resource occupation mechanism based on a specific security policy, with particularly good support for user-mode scheduling. It provides a snapshot- and statistics-based scheduling migration mechanism, in particular supporting fault-tolerant scheduling policies: it provides the scheduler with the resources required for scheduling optimization, dynamically monitors and isolates faulty entities, and completes the migration, avoiding the failure of critical tasks while balancing efficiency and reliability.
The user-mode scheduler executes the task to be scheduled using the processor resources allocated to it; if two CPUs are allocated to the task as in fig. 5, those two CPUs can be used directly to execute it. The user-mode scheduler can perform balanced or statistics-based job scheduling, selecting tasks to be scheduled and distributing them to suitable processor scheduling queues; it can be self-aware of affinity and set it dynamically without the user having to issue a processor-and-cache affinity policy, which helps improve system performance.
And the task distribution module is used for distributing the tasks to be scheduled to the scheduling queues according to the snapshot and the statistical analysis result of the system.
The kernel-mode scheduler executes the tasks to be scheduled in each scheduling queue using the CPU resources. For example, a CFS (Completely Fair Scheduler) kernel-mode scheduler may achieve a degree of intelligent scheduling based on the statistical analysis results it is interested in (e.g., offloading work from a single core, or automatically taking a core offline, when its local temperature is too high), and may also combine these with the snapshot of the system.
Referring to fig. 6, take as an example the scheduling method of a multi-core processor provided in the embodiment of the present invention applied to a real-time system such as a wireless controller: user-mode scheduling is implemented by binding processors, so that a special scheduling policy can be matched without modifying the kernel, and the overhead of system calls is reduced.
Referring to fig. 6, the implementation flow in this application scenario includes the following steps:
and S01, circularly processing the control information input by the user. The setting of the control information of the user mode scheduling by the user management module may include: the number of the required CPUs, the number of the processes/threads, the use strategy of Cache resources, the division of the capability set and other supportable attributes.
S02, determine if a snapshot is generated for the system? If yes, go to step S03, otherwise go to step S04.
S03, acquiring the snapshot and the statistical analysis result, and then triggering the step S05 to execute.
S04, generating a snapshot for the topological structure between the processor and the cache in the system, if the generation is successful, executing the step S03, and if the generation of the snapshot is failed, returning to the step S01 again.
And S05, generating a scheduling strategy according to the snapshot and the control information input by the user, and allocating processor resources for the task to be scheduled according to the snapshot, the scheduling strategy and the attribute of the task to be scheduled.
And S06, binding the task to be scheduled and the allocated processor resource.
S07, using the processor resource allocated to the task to be scheduled to start executing the task to be scheduled, and then triggering the steps S08 and S15 to execute respectively.
S08, providing snapshot content in real time to monitor the load of the system and the task to be scheduled, judging whether the load of the system exceeds a load threshold, judging whether the attribute of the task to be scheduled changes, and executing steps S09 and S13 respectively.
S09. Judge whether the system load is too heavy; if yes, trigger steps S10 and S11.
S10. Check the snapshot content and dynamically expand the processor resources.
S11. Determine according to the snapshot content whether some resources should be released; if yes, execute step S12. For example, the resources occupied by the task, such as the number of processors or whether a resource is held exclusively, may be adjusted dynamically.
S12. Dynamically reduce the processor resources.
S13. Judge whether the attributes of the task to be scheduled need adjustment or have changed; if yes, trigger steps S11 and S14.
S14. Judge whether the processor resources are dynamically adjusted; if yes, return to step S09.
S15. Provide snapshot content in real time to monitor errors based on the statistical analysis result and hardware faults, then trigger step S16.
S16. Judge whether an error has been found; if yes, trigger steps S17 and S18 respectively. When an error is discovered, for example through real-time monitoring, the processor resources are adjusted dynamically based on the snapshot content.
S17. Judge whether the cache errors have reached the threshold; if yes, execute step S19.
S18. Judge whether the CPU has a fault; if yes, execute step S19.
S19. Perform task migration and rescheduling with the help of the snapshot content provided in real time.
Fault-tolerant scheduling is performed when certain statistically analyzed error types (such as cache errors) reach a threshold or a hardware fault occurs: the faulty resource is isolated and the scheduling is migrated, according to the snapshot, to a processor specified in advance, so that critical applications do not fail.
Referring to fig. 7, for an application scenario of the scheduling method of a multi-core processor in a task distribution mechanism, taking the snapshot generating and analyzing module disposed in the operating system kernel as an example, the flow of the implementation method may include the following steps:
1) Based on a pre-configured file or the user management module, issue the task distribution policy of kernel-mode scheduling and parameters such as capability set division and the snapshot interval through the control and state query interface;
2) the snapshot generating and analyzing module (disposed in the kernel) divides different task distribution domains according to different capability sets or task requirements, each domain containing at least one processor scheduling queue;
3) generate the final scheduling policy according to the snapshot, the statistical analysis result, and the control information input by the user, and configure the task distribution module in the kernel;
4) the task distribution module adjusts the tasks to be scheduled into different processor scheduling queues in real time, avoiding the load balancing that would otherwise follow frequent scheduling;
5) through an interface with the snapshot generating and analyzing module, the kernel-mode scheduler can achieve topology awareness when scheduling a specific process/thread, and can generate a scheduling policy based on the attributes the scheduler cares about (such as processor load, instantaneous frequency, idle state, cache sharing, cache coherence, processor temperature, and the like) as a reference for scheduling. Kernel-mode scheduling thereby becomes more intelligent: CPU affinity need not be issued manually from user mode, and intelligent scheduling can be realized in the kernel-mode scheduler itself.
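The capability-set division of step 2) and the real-time placement of step 4) can be sketched together as follows; the capability-set names, load values, and tie-breaking rule are illustrative assumptions, not the patent's implementation:

```python
from collections import defaultdict

# Hypothetical capability sets dividing processors into distribution domains.
capability_sets = {"CPU0": "general", "CPU1": "general",
                   "CPU2": "crypto", "CPU3": "crypto"}

domains = defaultdict(dict)
for cpu, capset in capability_sets.items():
    domains[capset][cpu] = []  # at least one scheduling queue per processor

def distribute(task_id, required_capset, loads):
    # Place the task in the matching domain's shortest queue, breaking
    # ties with the per-CPU load taken from the latest snapshot.
    queues = domains[required_capset]
    target = min(queues, key=lambda c: (len(queues[c]), loads[c]))
    queues[target].append(task_id)
    return target

loads = {"CPU0": 0.5, "CPU1": 0.2, "CPU2": 0.9, "CPU3": 0.1}
print(distribute(7, "crypto", loads))  # CPU3: both queues empty, lower load
```

Distributing directly into per-domain queues like this is what lets the mechanism avoid the load balancing that frequent scheduling would otherwise require.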
It can be seen from the above that a snapshot is first generated for the topology between the processors and the caches in the system, where the snapshot may include the number of processors in the system, the attribute information of each processor, the attribute information of the cache encapsulated in each processor, the number of cores each processor contains, and whether and how many SMTs each core contains; a scheduling policy is then generated according to the snapshot and the control information input by the user; processor resources are allocated to the task to be scheduled according to the snapshot, the scheduling policy, and the attributes of the task; and finally the task is executed on the allocated processor resources. Because the snapshot is generated for the topology between the processors and the caches before processor resources are allocated, and the allocation is based on the snapshot, the scheduling policy, and the attributes of the task, the snapshot can describe the real condition of the system accurately and finely.
To facilitate a better implementation of the above-described aspects of embodiments of the present invention, the following also provides relevant means for implementing the above-described aspects.
Referring to fig. 8-a, a scheduling apparatus 800 of a multi-core processor according to an embodiment of the present invention includes: a snapshot generating module 801, an obtaining module 802, a resource allocating module 803, a scheduling module 804, wherein,
a snapshot generating module 801, configured to generate a snapshot for a topology between a processor and a cache in a system;
an obtaining module 802, configured to generate a scheduling policy according to the snapshot obtained by the snapshot generating module 801 and control information input by a user;
a resource allocation module 803, configured to allocate processor resources to the task to be scheduled according to the snapshot acquired by the snapshot generating module 801, the scheduling policy generated by the obtaining module 802, and the attribute of the task to be scheduled;
a scheduling module 804, configured to execute the task to be scheduled by using the processor resource allocated to the task to be scheduled by the resource allocation module 803.
In some embodiments of the present invention, the snapshot generating module 801 is specifically configured to periodically generate a snapshot of the topology between the processors and caches in the system; or, according to a preset snapshot range, generate a snapshot of the topological structure between the processors and caches within the snapshot range in the system; or, periodically generate snapshots of the topology between the processors and caches in the system according to the preset snapshot range.
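The three snapshot-generation modes (periodic, range-limited, and periodic within a range) can be sketched as follows; the function and field names are illustrative assumptions, not part of the embodiment:

```python
import itertools

def take_snapshot(system_cpus, snapshot_range=None):
    """Capture the topology/attribute data, restricted to a preset range if given."""
    cpus = system_cpus if snapshot_range is None else [
        c for c in system_cpus if c["id"] in snapshot_range]
    return {"processors": [dict(c) for c in cpus]}

def periodic_snapshots(system_cpus, period_ticks, snapshot_range=None):
    """Yield a snapshot every `period_ticks` ticks (simulated here as a generator)."""
    for tick in itertools.count():
        if tick % period_ticks == 0:
            yield take_snapshot(system_cpus, snapshot_range)

cpus = [{"id": 0, "load": 0.3}, {"id": 1, "load": 0.9}, {"id": 2, "load": 0.5}]
gen = periodic_snapshots(cpus, period_ticks=1, snapshot_range={0, 2})
first = next(gen)
print([p["id"] for p in first["processors"]])  # → [0, 2]
```

Restricting the snapshot range reduces the cost of each sampling pass when only part of the system is of interest to the scheduler.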
Referring to fig. 8-b, in some embodiments of the present invention, a scheduling apparatus 800 of a multicore processor may further include: a judging module 805, wherein,
the judging module 805 is configured to, if a snapshot is periodically generated on the topology between the processors and caches in the system, judge whether the load of the system exceeds a load threshold according to the periodically generated snapshot;
the resource allocation module 803 is further configured to, when the load of the system exceeds a load threshold, allocate processor resources to the task to be scheduled again according to the periodically generated snapshot, the scheduling policy, and the attribute of the task to be scheduled;
the scheduling module 804 is further configured to continue executing the task to be scheduled by using the processor resource reallocated to the task to be scheduled.
In some embodiments of the present invention, the judging module 805 is further configured to, if a snapshot is periodically generated on the topology between the processors and caches in the system, judge whether the attributes of the task to be scheduled change;
the resource allocation module 803 is further configured to, when the attributes of the task to be scheduled change, reallocate processor resources to the task to be scheduled according to the periodically generated snapshot, the scheduling policy, and the changed attributes of the task to be scheduled;
the scheduling module 804 is further configured to continue executing the task to be scheduled by using the processor resource reallocated to the task to be scheduled.
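The two periodic re-evaluation triggers just described (system load exceeding a threshold, or the attributes of the task changing) can be sketched as a single decision function; all names are hypothetical:

```python
def needs_reallocation(snapshot, task, load_threshold, prev_task_attrs):
    """Decide, from a periodic snapshot, whether processor resources
    must be reallocated for the task to be scheduled."""
    loads = [p["load"] for p in snapshot["processors"]]
    system_load = sum(loads) / len(loads)
    load_exceeded = system_load > load_threshold        # trigger 1: load threshold
    attrs_changed = task["attrs"] != prev_task_attrs    # trigger 2: attribute change
    return load_exceeded or attrs_changed

snap = {"processors": [{"load": 0.9}, {"load": 0.8}]}
task = {"attrs": {"priority": 5}}
print(needs_reallocation(snap, task, load_threshold=0.7,
                         prev_task_attrs={"priority": 5}))  # → True
```

In this sketch the average load (0.85) exceeds the threshold, so reallocation is triggered even though the task attributes are unchanged; either condition alone suffices.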
Referring to fig. 8-c, in some embodiments of the present invention, a scheduling apparatus 800 of a multicore processor may further include: a statistical analysis module 806, wherein,
the statistical analysis module 806 is configured to perform statistical analysis on the attribute information of each processor according to the snapshot to generate a statistical analysis result;
the obtaining module 802 is specifically configured to generate a scheduling policy according to the snapshot, the statistical analysis result, and control information input by a user.
Referring to fig. 8-c, in some embodiments of the present invention, a scheduling apparatus 800 of a multicore processor may further include, in addition to the statistical analysis module 806: a judging module 805 and a scheduling migration module 807, wherein,
the judging module 805 is configured to judge whether an error based on the statistical analysis result occurs according to the statistical analysis result;
the scheduling migration module 807 is configured to, when an error based on a result of the statistical analysis occurs, migrate the task to be scheduled to the processor specified by the scheduling policy according to the scheduling policy.
In other embodiments of the present invention, referring to fig. 8-d, the statistical analysis module 806 may specifically include at least one of the following six sub-modules:
the first statistical analysis sub-module 8061 is configured to perform statistical analysis according to the idle state attribute of each processor, and generate a first statistical analysis result, where the first statistical analysis result includes: an idle state queue comprising an idle state ordering queue for each processor;
a second statistical analysis submodule 8062, configured to perform statistical analysis according to the frequency attribute of each processor, and generate a second statistical analysis result, where the second statistical analysis result includes: a frequency queue comprising frequency high and low ordering queues of the processors;
a third statistical analysis submodule 8063, configured to perform statistical analysis according to the load attribute of each processor, and generate a third statistical analysis result, where the third statistical analysis result includes: a load queue comprising a load high-low ordering queue of the respective processors;
a fourth statistical analysis submodule 8064, configured to perform statistical analysis according to the cache error attribute of each processor, and generate a fourth statistical analysis result, where the fourth statistical analysis result includes: a cache error queue including a cache error number sorting queue of each processor;
a fifth statistical analysis submodule 8065, configured to perform statistical analysis according to the temperature attribute of each processor, and generate a fifth statistical analysis result, where the fifth statistical analysis result includes: the temperature queues comprise temperature high-low ordering queues of the processors;
a sixth statistical analysis submodule 8066, configured to perform statistical analysis according to the queue attribute of each processor, and generate a sixth statistical analysis result, where the sixth statistical analysis result includes: and the task queues comprise task quantity sequencing or task priority high-low sequencing queues in the queues of the processors.
All sub-modules that may be included in the statistical analysis module 806 are shown in fig. 8-d; it can be understood that in practical applications not all sub-modules need to be used, and which sub-modules to select can be decided flexibly according to the specific application scenario.
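The six sub-modules share one pattern: each produces a queue of processors ordered by a single attribute (idle state, frequency, load, cache errors, temperature, or task queue). A minimal sketch of that pattern, with hypothetical field names, might be:

```python
def build_statistical_queues(processors, attributes):
    """Produce one sorted queue per processor attribute,
    one queue per statistical analysis sub-module."""
    queues = {}
    for attr in attributes:
        # Each queue orders processor ids by the attribute value, lowest first.
        queues[attr] = [p["id"] for p in sorted(processors, key=lambda p: p[attr])]
    return queues

procs = [
    {"id": 0, "load": 0.7, "frequency": 2400, "temperature": 60},
    {"id": 1, "load": 0.2, "frequency": 3000, "temperature": 45},
    {"id": 2, "load": 0.5, "frequency": 1800, "temperature": 70},
]
queues = build_statistical_queues(procs, ["load", "frequency", "temperature"])
print(queues["load"])         # → [1, 2, 0]
print(queues["temperature"])  # → [1, 0, 2]
```

A scheduler consulting these queues can, for example, take the head of the load queue as the least-loaded candidate and cross-check it against the temperature queue before dispatching.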
In other embodiments of the present invention, the statistical analysis module 806 is specifically configured to sort the statistical analysis results in a form of a red-black tree RBtree or a binary heap.
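Both structures keep the extreme element retrievable cheaply: a red-black tree keeps all entries fully ordered, while a binary heap keeps only the minimum (or maximum) at the root with O(log n) updates. A binary-heap sketch using Python's standard `heapq` (an illustrative stand-in, not the embodiment's kernel implementation):

```python
import heapq

# Maintain processors keyed by load; the least-loaded processor is always
# at the root, so the scheduler reads a target in O(1) and updates in O(log n).
heap = []
for cpu_id, load in [(0, 0.7), (1, 0.2), (2, 0.5)]:
    heapq.heappush(heap, (load, cpu_id))

least_load, best_cpu = heap[0]
print(best_cpu)  # → 1

# After dispatching a task, reinsert the processor with its updated load:
heapq.heapreplace(heap, (least_load + 0.6, best_cpu))
print(heap[0][1])  # → 2
```

The same interface could be backed by a red-black tree when full ordered traversal of the queue (not just its head) is needed.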
In other embodiments of the present invention, the scheduling apparatus 800 of a multicore processor further includes a judging module 805 and a scheduling migration module 807, wherein:
the judging module 805 is configured to judge whether a processor in the system has a fault in software or hardware;
the scheduling migration module 807 is configured to, when a processor in the system fails in software or hardware, migrate the task to be scheduled to the processor specified by the scheduling policy according to the scheduling policy.
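The migration step can be sketched as moving every task queued on the failed processor to the processor specified by the scheduling policy; the run-queue representation and names below are hypothetical:

```python
def migrate_on_fault(run_queues, failed_cpu, policy_target):
    """Move all tasks queued on a failed processor to the
    processor specified by the scheduling policy."""
    moved = run_queues.pop(failed_cpu, [])
    run_queues.setdefault(policy_target, []).extend(moved)
    return len(moved)

queues = {0: ["taskA", "taskB"], 1: ["taskC"]}
n = migrate_on_fault(queues, failed_cpu=0, policy_target=1)
print(n)          # → 2
print(queues[1])  # → ['taskC', 'taskA', 'taskB']
```

The same routine also covers migration triggered by an error in the statistical analysis result, since in both cases the destination is dictated by the scheduling policy rather than chosen ad hoc.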
In other embodiments of the present invention, the snapshot generated by the snapshot generating module 801 includes at least one of the following information: the number of processors in the system, attribute information for each processor, attribute information for the caches packaged within each processor, the number of cores included per processor, and whether each core includes simultaneous multithreading (SMT) and the number of SMT threads.
In other embodiments of the present invention, the snapshot generating module 801 is further configured to generate a snapshot including attribute information of a shared cache encapsulated in each processor.
It can be seen from the above that, a snapshot is first generated for the topology structure between the processor and the cache in the system, then a scheduling policy is generated according to the snapshot and the control information input by the user, a processor resource is allocated for the task to be scheduled according to the snapshot, the scheduling policy and the attribute of the task to be scheduled, and finally the scheduling task is executed according to the processor resource allocated to the task to be scheduled. Because the snapshot is generated aiming at the topological structure between the processor and the cache in the system before the processor resources are allocated to the task to be scheduled, the processor resources are allocated based on the snapshot, the scheduling strategy and the attributes of the task to be scheduled, and the snapshot can describe the real condition of the system more accurately and more finely.
An embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores a program that, when executed, performs some or all of the steps described in the above method embodiments.
Referring to fig. 9, a scheduling apparatus 900 of a multi-core processor according to another embodiment of the present invention includes:
an input device 901, an output device 902, a processor 903 and a memory 904 (wherein the number of the processors 903 in the scheduling device 900 may be one or more, and one processor is taken as an example in fig. 9). In some embodiments of the present invention, the input device 901, the output device 902, the processor 903 and the memory 904 may be connected by a bus or other means, wherein the connection by the bus is exemplified in fig. 9.
The processor 903 is configured to execute the following steps: generating a snapshot of a topology between a processor and a cache in a system; generating a scheduling strategy according to the snapshot and control information input by a user; allocating processor resources for the task to be scheduled according to the snapshot, the scheduling strategy and the attribute of the task to be scheduled; and executing the task to be scheduled by using the processor resource distributed to the task to be scheduled.
In some embodiments of the invention, the processor 903 is specifically configured to perform the following steps: periodically generating snapshots of a topological structure between a processor and a cache in the system; or,
generating a snapshot for a topological structure between a processor and a cache in the snapshot range in a system according to a preset snapshot range; or,
and periodically generating snapshots for the topological structure between the processor and the cache in the system according to the preset snapshot range.
In some embodiments of the invention, the processor 903 is further configured to perform the following steps: and performing statistical analysis on the attribute information of each processor according to the snapshot to generate a statistical analysis result.
In other embodiments of the invention, the processor 903 is further configured to perform the following steps: if a snapshot is generated on a topological structure between a processor and a cache in the system periodically, judging whether the load of the system exceeds a load threshold according to the periodically generated snapshot; if the load of the system exceeds a load threshold, allocating processor resources for the task to be scheduled again according to the periodically generated snapshot, the scheduling strategy and the attribute of the task to be scheduled; and continuing to execute the task to be scheduled by using the processor resource reallocated to the task to be scheduled.
In some embodiments of the invention, the processor 903 is further configured to perform the following steps: if a snapshot is generated on a topological structure between a processor and a cache in the system periodically, judging whether the attribute of the task to be scheduled changes; if the attributes of the tasks to be scheduled change, allocating processor resources for the tasks to be scheduled again according to the periodically generated snapshots, the scheduling strategy and the changed attributes of the tasks to be scheduled; and continuing to execute the task to be scheduled by using the processor resource reallocated to the task to be scheduled.
In some embodiments of the invention, the processor 903 is further configured to perform the following steps: performing statistical analysis on the attribute information of each processor according to the snapshot to generate a statistical analysis result; and generating a scheduling strategy according to the snapshot, the statistical analysis result and the control information input by the user.
In some embodiments of the invention, the processor 903 is further configured to perform the following steps: judging whether errors based on the statistical analysis result occur according to the statistical analysis result; and if the error based on the statistical analysis result occurs, migrating the task to be scheduled to a processor specified by the scheduling policy according to the scheduling policy.
In some embodiments of the invention, the processor 903 is specifically configured to perform the following steps: performing statistical analysis according to the idle state attributes of the processors to generate a first statistical analysis result, where the first statistical analysis result includes: an idle state queue comprising an idle state ordering queue for each processor;
performing statistical analysis according to the frequency attributes of the processors to generate a second statistical analysis result, where the second statistical analysis result includes: a frequency queue comprising frequency high and low ordering queues of the processors;
performing statistical analysis according to the load attributes of the processors to generate a third statistical analysis result, where the third statistical analysis result includes: a load queue comprising a load high-low ordering queue of the respective processors;
performing statistical analysis according to the cache error attribute of each processor to generate a fourth statistical analysis result, where the fourth statistical analysis result includes: a cache error queue including a cache error number sorting queue of each processor;
performing statistical analysis according to the temperature attributes of the processors to generate a fifth statistical analysis result, where the fifth statistical analysis result includes: the temperature queues comprise temperature high-low ordering queues of the processors;
performing statistical analysis according to the queue attributes of the processors to generate a sixth statistical analysis result, where the sixth statistical analysis result includes: and the task queues comprise task quantity sequencing or task priority high-low sequencing queues in the queues of the processors.
In some embodiments of the invention, the processor 903 is specifically configured to perform the following steps: and sequencing the statistical analysis results in a red-black tree RB tree or binary heap mode.
In some embodiments of the invention, the processor 903 is further configured to perform the following steps: judging whether a processor in the system has a fault in software or hardware;
and if the processor in the system has a fault in the aspect of software or hardware, migrating the task to be scheduled to the processor specified by the scheduling policy according to the scheduling policy.
In some embodiments of the invention, the processor 903 is specifically configured to perform the following steps: generating a snapshot of at least one of the following information: the number of processors in the system, attribute information for each processor, attribute information for the caches packaged within each processor, the number of cores included per processor, and whether each core includes simultaneous multithreading (SMT) and the number of SMT threads.
In some embodiments of the invention, the processor 903 is further configured to perform the following steps: snapshots are generated of the attribute information of the shared cache encapsulated within the respective processors.
In summary, a snapshot is first generated for a topology structure between a processor and a cache in a system, then a scheduling policy is generated according to the snapshot and control information input by a user, a processor resource is allocated to a task to be scheduled according to the snapshot, the scheduling policy and attributes of the task to be scheduled, and finally the scheduling task is executed according to the processor resource allocated to the task to be scheduled. Because the snapshot is generated aiming at the topological structure between the processor and the cache in the system before the processor resources are allocated to the task to be scheduled, the processor resources are allocated based on the snapshot, the scheduling strategy and the attributes of the task to be scheduled, and the snapshot can describe the real condition of the system more accurately and more finely.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be completed by related hardware instructed by a program, and the program may be stored in a computer-readable storage medium; the storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
While the scheduling method and related apparatus for a multi-core processor provided by the present invention have been described in detail above, those skilled in the art may make variations in the specific implementation and application scope according to the concepts of the embodiments of the present invention. In summary, the content of this description should not be construed as limiting the present invention.

Claims (20)

1. A scheduling method of a multi-core processor, comprising:
generating a snapshot of a topology between a processor and a cache in a system;
generating a scheduling strategy according to the snapshot and control information input by a user;
allocating processor resources for the task to be scheduled according to the snapshot, the scheduling strategy and the attribute of the task to be scheduled;
executing the task to be scheduled by using the processor resource distributed to the task to be scheduled;
after the snapshot is generated on the topology between the processor and the cache in the system, the method further comprises the following steps:
performing statistical analysis on the attribute information of each processor according to the snapshot to generate a statistical analysis result;
the generating a scheduling policy according to the snapshot and the control information input by the user specifically includes:
and generating a scheduling strategy according to the snapshot, the statistical analysis result and the control information input by the user.
2. The method of claim 1, wherein the generating the snapshot of the topology between the processor and the cache in the system comprises:
periodically generating snapshots of a topological structure between a processor and a cache in the system; or,
generating a snapshot for a topological structure between a processor and a cache in the snapshot range in a system according to a preset snapshot range; or,
and periodically generating snapshots for the topological structure between the processor and the cache in the system according to the preset snapshot range.
3. The method according to claim 2, wherein if a snapshot is periodically generated on a topology between a processor and a cache in a system, the executing the task to be scheduled using the processor resource allocated to the task to be scheduled further comprises:
judging whether the load of the system exceeds a load threshold according to the periodically generated snapshots;
if the load of the system exceeds a load threshold, allocating processor resources for the task to be scheduled again according to the periodically generated snapshot, the scheduling strategy and the attribute of the task to be scheduled;
and continuing to execute the task to be scheduled by using the processor resource reallocated to the task to be scheduled.
4. The method according to claim 2, wherein if a snapshot is periodically generated on a topology between a processor and a cache in a system, the executing the task to be scheduled using the processor resource allocated to the task to be scheduled further comprises:
judging whether the attribute of the task to be scheduled changes;
if the attributes of the tasks to be scheduled change, allocating processor resources for the tasks to be scheduled again according to the periodically generated snapshots, the scheduling strategy and the changed attributes of the tasks to be scheduled;
and continuing to execute the task to be scheduled by using the processor resource reallocated to the task to be scheduled.
5. The method of claim 1, wherein executing the task to be scheduled using the processor resources allocated to the task to be scheduled further comprises:
judging whether errors based on the statistical analysis result occur according to the statistical analysis result;
and if the error based on the statistical analysis result occurs, migrating the task to be scheduled to a processor specified by the scheduling policy according to the scheduling policy.
6. The method of claim 5, wherein the performing a statistical analysis on the attribute information of each processor according to the snapshot to generate a statistical analysis result includes at least one of the following six implementation manners:
performing statistical analysis according to the idle state attributes of the processors to generate a first statistical analysis result, where the first statistical analysis result includes: an idle state queue comprising an idle state ordering queue for each processor;
performing statistical analysis according to the frequency attributes of the processors to generate a second statistical analysis result, where the second statistical analysis result includes: a frequency queue comprising frequency high and low ordering queues of the processors;
performing statistical analysis according to the load attributes of the processors to generate a third statistical analysis result, where the third statistical analysis result includes: a load queue comprising a load high-low ordering queue of the respective processors;
performing statistical analysis according to the cache error attribute of each processor to generate a fourth statistical analysis result, where the fourth statistical analysis result includes: a cache error queue including a cache error number sorting queue of each processor;
performing statistical analysis according to the temperature attributes of the processors to generate a fifth statistical analysis result, where the fifth statistical analysis result includes: the temperature queues comprise temperature high-low ordering queues of the processors;
performing statistical analysis according to the queue attributes of the processors to generate a sixth statistical analysis result, where the sixth statistical analysis result includes: and the task queues comprise task quantity sequencing or task priority high-low sequencing queues in the queues of the processors.
7. The method of claim 1, wherein the statistical analysis results are ordered in the form of a red-black tree RB tree or a binary heap.
8. The method of claim 1, wherein executing the task to be scheduled using the processor resources allocated to the task to be scheduled further comprises:
judging whether a processor in the system has a fault in the aspect of software or hardware;
and if the processor in the system has a fault in the aspect of software or hardware, migrating the task to be scheduled to the processor specified by the scheduling policy according to the scheduling policy.
9. The method according to any of claims 1 to 8, wherein the snapshot comprises information of at least one of: the number of processors in the system, attribute information for each processor, attribute information for the caches packaged within each processor, the number of cores included per processor, and whether each core includes simultaneous multithreading (SMT) and the number of SMT threads.
10. The method of claim 9, wherein the snapshot further comprises attribute information of the shared cache encapsulated within each processor.
11. A scheduling apparatus of a multi-core processor, comprising:
the snapshot generating module is used for generating a snapshot for a topological structure between a processor and a cache in the system;
the acquisition module is used for generating a scheduling strategy according to the snapshot and control information input by a user;
the resource allocation module is used for allocating processor resources for the tasks to be scheduled according to the snapshots, the scheduling strategies and the attributes of the tasks to be scheduled;
the scheduling module is used for executing the tasks to be scheduled by using the processor resources distributed to the tasks to be scheduled;
the device further comprises: a statistical analysis module, wherein,
the statistical analysis module is used for performing statistical analysis on the attribute information of each processor according to the snapshot to generate a statistical analysis result;
the obtaining module is specifically configured to generate a scheduling policy according to the snapshot, the statistical analysis result, and control information input by a user.
12. The apparatus according to claim 11, wherein the snapshot generating module is specifically configured to periodically generate a snapshot of the topology between the processor and the cache in the system; or, according to a preset snapshot range, generate a snapshot of the topological structure between the processor and the cache within the snapshot range in the system; or, periodically generate snapshots of the topology between the processor and the cache in the system according to the preset snapshot range.
13. The apparatus of claim 12, further comprising: a determination module, wherein,
the judging module is used for judging whether the load of the system exceeds a load threshold according to the periodically generated snapshot if the periodically generated snapshot is generated on the topological structure between the processor and the cache in the system;
the resource allocation module is further configured to, when the load of the system exceeds a load threshold, reallocate processor resources to the task to be scheduled according to the periodically generated snapshot, the scheduling policy, and the attribute of the task to be scheduled;
the scheduling module is further configured to continue executing the task to be scheduled by using the processor resource reallocated to the task to be scheduled.
14. The apparatus of claim 12, further comprising: a determination module, wherein,
the judging module is used for judging whether the attribute of the task to be scheduled changes if a snapshot is generated periodically on a topological structure between a processor and a cache in a system;
the resource allocation module is further configured to, when the attributes of the task to be scheduled change, reallocate processor resources to the task to be scheduled according to the periodically generated snapshot, the scheduling policy, and the changed attributes of the task to be scheduled;
the scheduling module is further configured to continue executing the task to be scheduled by using the processor resource reallocated to the task to be scheduled.
15. The apparatus of claim 11, further comprising: a judging module and a scheduling migration module, wherein,
the judging module is used for judging whether errors based on the statistical analysis result occur according to the statistical analysis result;
and the scheduling migration module is used for migrating the task to be scheduled to a processor specified by the scheduling policy according to the scheduling policy when an error based on a statistical analysis result occurs.
16. The apparatus of claim 15, wherein the statistical analysis module comprises at least one of the following six sub-modules:
a first statistical analysis sub-module, configured to perform statistical analysis according to the idle state attributes of the processors to generate a first statistical analysis result, where the first statistical analysis result includes: an idle state queue comprising an idle state ordering queue for each processor;
a second statistical analysis submodule, configured to perform statistical analysis according to the frequency attribute of each processor, and generate a second statistical analysis result, where the second statistical analysis result includes: a frequency queue comprising frequency high and low ordering queues of the processors;
a third statistical analysis submodule, configured to perform statistical analysis according to the load attribute of each processor, and generate a third statistical analysis result, where the third statistical analysis result includes: a load queue comprising a load high-low ordering queue of the respective processors;
a fourth statistical analysis submodule, configured to perform statistical analysis according to the cache error attribute of each processor, and generate a fourth statistical analysis result, where the fourth statistical analysis result includes: a cache error queue including a cache error number sorting queue of each processor;
a fifth statistical analysis submodule, configured to perform statistical analysis according to the temperature attribute of each processor, and generate a fifth statistical analysis result, where the fifth statistical analysis result includes: the temperature queues comprise temperature high-low ordering queues of the processors;
a sixth statistical analysis submodule, configured to perform statistical analysis according to the queue attributes of the processors, and generate a sixth statistical analysis result, where the sixth statistical analysis result includes: and the task queues comprise task quantity sequencing or task priority high-low sequencing queues in the queues of the processors.
17. The apparatus according to claim 11, wherein the statistical analysis module is specifically configured to sort the statistical analysis results in a red-black tree RB tree or a binary heap.
18. The apparatus of claim 11, further comprising: a judging module and a scheduling migration module, wherein,
the judging module is used for judging whether the processor in the system has a fault in the aspect of software or hardware;
and the scheduling migration module is used for migrating the task to be scheduled to the processor specified by the scheduling policy according to the scheduling policy when the processor in the system has a fault in the aspect of software or hardware.
19. The apparatus according to any one of claims 11 to 18, wherein the snapshot generated by the snapshot generation module comprises at least one of the following information: the number of processors in the system, attribute information for each processor, attribute information for the caches packaged within each processor, the number of cores included per processor, and whether each core includes simultaneous multithreading (SMT) and the number of SMT threads.
20. The apparatus of claim 19, wherein the snapshot generating module is further configured to generate the snapshot including the attribute information of the shared cache encapsulated in each processor.
CN201310373371.XA 2013-08-23 2013-08-23 The dispatching method of a kind of polycaryon processor and relevant apparatus Expired - Fee Related CN103440173B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310373371.XA CN103440173B (en) 2013-08-23 2013-08-23 The dispatching method of a kind of polycaryon processor and relevant apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310373371.XA CN103440173B (en) 2013-08-23 2013-08-23 The dispatching method of a kind of polycaryon processor and relevant apparatus

Publications (2)

Publication Number Publication Date
CN103440173A CN103440173A (en) 2013-12-11
CN103440173B true CN103440173B (en) 2016-09-21

Family

ID=49693863

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310373371.XA Expired - Fee Related CN103440173B (en) 2013-08-23 2013-08-23 The dispatching method of a kind of polycaryon processor and relevant apparatus

Country Status (1)

Country Link
CN (1) CN103440173B (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015096031A1 (en) * 2013-12-24 2015-07-02 华为技术有限公司 Method and apparatus for allocating thread shared resource
CN103744726B (en) * 2014-01-02 2017-01-04 西北工业大学 A kind of two-level scheduler method of Windows system real-time extension
CN104834565B (en) * 2014-02-12 2018-08-14 华为技术有限公司 A kind of system service dynamic deployment method and device
CN106502779B (en) * 2014-03-28 2019-07-02 哈尔滨工业大学 A kind of task immigration method of the load judgment method based on NoC multicore isomorphism system
CN104035823B (en) * 2014-06-17 2018-06-26 华为技术有限公司 Load-balancing method and device
CN104281495B (en) * 2014-10-13 2017-04-26 湖南农业大学 Method for task scheduling of shared cache of multi-core processor
CN104391747A (en) * 2014-11-18 2015-03-04 北京锐安科技有限公司 Parallel computation method and parallel computation system
WO2017017829A1 (en) * 2015-07-30 2017-02-02 三菱電機株式会社 Program execution device, program execution system, and program execution method
WO2017070900A1 (en) * 2015-10-29 2017-05-04 华为技术有限公司 Method and apparatus for processing task in a multi-core digital signal processing system
CN106793093B (en) * 2015-11-19 2019-12-06 大唐移动通信设备有限公司 Service processing method and device
CN106227606A (en) * 2016-07-28 2016-12-14 张升泽 The method and system of many interval distribution electronic chip voltages
WO2018032519A1 (en) * 2016-08-19 2018-02-22 华为技术有限公司 Resource allocation method and device, and numa system
CN106940657A (en) * 2017-02-20 2017-07-11 深圳市金立通信设备有限公司 A kind of method and terminal that task distribution is carried out to processor
CN109086125B (en) * 2017-06-14 2021-01-22 杭州海康威视数字技术股份有限公司 Picture analysis method, device and system, computer equipment and storage medium
CN108573014B (en) * 2017-12-19 2021-05-28 北京金山云网络技术有限公司 File synchronization method and device, electronic equipment and readable storage medium
CN109753593A (en) * 2018-12-29 2019-05-14 广州极飞科技有限公司 Spraying operation method for scheduling task and unmanned plane
CN110990139B (en) * 2019-12-06 2020-11-24 安徽芯智科技有限公司 SMP scheduling method and system based on RTOS
WO2021174466A1 (en) * 2020-03-04 2021-09-10 深圳市大疆创新科技有限公司 Self-adaptive load balancing method and system, and storage medium
CN111767148B (en) * 2020-06-29 2022-03-01 中国电子科技集团公司第五十四研究所 Embedded system resource management method based on multi-core DSP
CN113032145B (en) * 2021-03-18 2023-12-26 北京计算机技术及应用研究所 Task scheduling method based on domestic multi-NUMA node CPU junction temperature balancing strategy

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101146089A (en) * 2007-08-22 2008-03-19 杭州华三通信技术有限公司 Method for configuring core resources in multi-core system, multi-core system and management core
CN101373444A (en) * 2007-03-30 2009-02-25 英特尔公司 Exposing system topology to the execution environment
CN101634953A (en) * 2008-07-22 2010-01-27 国际商业机器公司 Method and device for calculating search space, and method and system for self-adaptive thread scheduling
CN101840329A (en) * 2010-04-19 2010-09-22 浙江大学 Data parallel processing method based on graph topological structure

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8667334B2 (en) * 2010-08-27 2014-03-04 Hewlett-Packard Development Company, L.P. Problem isolation in a virtual environment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101373444A (en) * 2007-03-30 2009-02-25 英特尔公司 Exposing system topology to the execution environment
CN101146089A (en) * 2007-08-22 2008-03-19 杭州华三通信技术有限公司 Method for configuring core resources in multi-core system, multi-core system and management core
CN101634953A (en) * 2008-07-22 2010-01-27 国际商业机器公司 Method and device for calculating search space, and method and system for self-adaptive thread scheduling
CN101840329A (en) * 2010-04-19 2010-09-22 浙江大学 Data parallel processing method based on graph topological structure

Also Published As

Publication number Publication date
CN103440173A (en) 2013-12-11

Similar Documents

Publication Publication Date Title
CN103440173B (en) The dispatching method of a kind of polycaryon processor and relevant apparatus
CN107431696B (en) Method and cloud management node for application automation deployment
CN110941481A (en) Resource scheduling method, device and system
CN109564528B (en) System and method for computing resource allocation in distributed computing
US11876731B2 (en) System and methods for sharing memory subsystem resources among datacenter applications
US20170083367A1 (en) System and method for resource management
CN102831015B (en) The dispatching method of polycaryon processor and equipment
JP2008191949A (en) Multi-core system, and method for distributing load of the same
CN109992366B (en) Task scheduling method and task scheduling device
KR20120066189A (en) Apparatus for dynamically self-adapting of software framework on many-core systems and method of the same
WO2016092856A1 (en) Information processing device, information processing system, task processing method, and storage medium for storing program
CN112860387A (en) Distributed task scheduling method and device, computer equipment and storage medium
EP4361808A1 (en) Resource scheduling method and device and computing node
CN114816709A (en) Task scheduling method, device, server and readable storage medium
US9459930B1 (en) Distributed complementary workload scheduling
US9158601B2 (en) Multithreaded event handling using partitioned event de-multiplexers
CN115033356A (en) Heterogeneous reconfigurable dynamic resource scheduling method and system
CN114721824A (en) Resource allocation method, medium and electronic device
KR20120083000A (en) Method for dynamically assigned of parallel control module
Liu et al. Mind the gap: Broken promises of CPU reservations in containerized multi-tenant clouds
Soualhia et al. ATLAS: An adaptive failure-aware scheduler for hadoop
US20120042322A1 (en) Hybrid Program Balancing
Markthub et al. Using rcuda to reduce gpu resource-assignment fragmentation caused by job scheduler
KR20130112180A (en) Method for scheduling of mobile multi-core virtualization system to guarantee real time process
CN113760485A (en) Scheduling method, device and equipment of timing task and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160921

Termination date: 20180823