WO2011103825A2 - Method and device for balancing load of multiprocessor system - Google Patents
Method and device for balancing load of multiprocessor system
- Publication number
- WO2011103825A2 (application PCT/CN2011/072913, CN2011072913W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- migration
- migration priority
- cpu
- priority
- local
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
- G06F9/5088—Techniques for rebalancing the load in a distributed system involving task migration
Definitions
- the present invention relates to the field of resource allocation for multiprocessor systems, and more particularly to a method and apparatus for load balancing of a multiprocessor system.
- in a multiprocessor system, each CPU (Central Processing Unit) maintains its own separate run queue, which can cause load imbalance between multiple CPUs; a process (or thread) therefore needs to be migrated from a heavily loaded CPU to a lightly loaded CPU to achieve load balancing.
- the CPU from which the process migrates is called the local CPU, and the CPU to which the process migrates is called the target CPU.
- the prior art proposes a method that migrates processes according to whether they are hot or cold in the cache, in order to achieve load balancing. The method includes: first, setting a threshold (maximum value) for the waiting time of a process, where the waiting time of a process means the time elapsed, at the moment when it is decided whether to migrate the process, since the process last finished executing; then, determining whether the waiting time of a process on the local CPU is greater than the threshold; if so, the process is considered cold in the cache and is migrated to the target CPU; if not, the process is considered hot in the cache and is not migrated. The method is applied cyclically on each CPU until a certain degree of load balance is reached among the CPUs.
- although the above method can achieve load balancing, it may also migrate processes that occupy a large amount of memory space to the target CPU, so that when these processes execute on the target CPU, the memory of the remote node must be accessed many times, or a large amount of process-related data must be copied from the remote node's memory to the local node's memory, which degrades system performance.
- embodiments of the present invention provide a method and apparatus for load balancing a multi-CPU system, with the purpose of reducing the number of accesses to remote node memory, or the amount of data copied, when a process migrated to the target CPU executes.
- a method for load balancing a multiprocessor system comprising:
- determining a local central processing unit (CPU) and a target CPU in the multiprocessor system; sorting migration priorities according to the size of the memory space occupied by each process in the local CPU queue, where a process that occupies less memory space has a higher migration priority;
- migrating the process in the local CPU queue, other than the process currently being executed, that has the highest migration priority to the target CPU.
- a device for load balancing a multiprocessor system comprising:
- a determining unit configured to determine a local central processing unit CPU and a target CPU in the multiprocessor system
- a sorting unit configured to sort migration priorities according to the size of the memory space occupied by each process in the local CPU queue, where a process that occupies less memory space has a higher migration priority;
- a migration unit configured to migrate the process in the local CPU queue, other than the process being executed, that has the highest migration priority to the target CPU.
- in the method and apparatus for load balancing a multiprocessor system provided by the embodiments of the present invention, migration priorities are sorted according to the size of the memory space occupied by the processes in the local CPU queue, and the process in the local CPU queue, other than the process currently being executed, that has the highest migration priority is migrated to the target CPU.
- because a process that occupies less memory space has a higher migration priority, the process occupying the smallest memory space in the local CPU queue can be migrated to the target CPU first, thereby reducing the number of accesses to remote node memory, or the amount of data copied, when the migrated process executes on the target CPU.
- FIG. 1 is a schematic diagram of a method for load balancing of a multiprocessor system according to Embodiment 1;
- FIG. 2 is a schematic diagram of a method for load balancing of a multiprocessor system according to Embodiment 2;
- FIG. 3 is a schematic diagram of a method for load balancing of a multiprocessor system according to Embodiment 3;
- FIG. 4 is a schematic diagram of another method for load balancing of a multiprocessor system according to Embodiment 3;
- FIG. 5 is a block diagram of a multiprocessor system load balancing apparatus according to an embodiment of the present invention;
- FIG. 6 is a block diagram of another apparatus for load balancing of a multiprocessor system according to an embodiment of the present invention.
- a multiprocessor system is a system that has at least two CPUs for data processing.
- the NUMA (Non-Uniform Memory Access Architecture) system is a multiprocessor system in which the CPUs in all nodes can access all of the system's physical memory, while the latency for a CPU in one node to access the memory of different other nodes differs.
- the solution provided by the present invention can be applied to a multiprocessor system of the NUMA architecture.
- Embodiment 1:
- an embodiment of the present invention provides a method for load balancing a multi-processor system, including:
- Step 101: Determine a local central processing unit (CPU) and a target CPU in the multiprocessor system. For a multiprocessor system, each clock interrupt checks whether a load imbalance exists in the multiprocessor system; when a load imbalance exists, load balancing processing is required.
- the load balancing processing refers to migrating a process from a heavily loaded CPU to a lightly loaded CPU, where the CPU from which the process migrates is called the local CPU and the CPU to which the process migrates is called the target CPU;
- generally, a heavily loaded CPU can serve as the local CPU, and a lightly loaded CPU can serve as the target CPU.
- Step 102: Sort migration priorities according to the size of the memory space occupied by the processes in the local CPU queue, where a process that occupies less memory space has a higher migration priority;
- the migration priority represents the order of precedence for process migration; that is, the higher a process's migration priority, the sooner the process is migrated.
- this step may be performed by reading the size of the memory space occupied by all processes in the local CPU queue and sorting migration priorities according to the size of the memory space occupied by each process, where a process that occupies less memory space has a higher migration priority.
- Step 103: Migrate the process in the local CPU queue, other than the process being executed, that has the highest migration priority to the target CPU.
- the above three steps may be repeated until the multiprocessor system reaches or approaches load balance, or until the load of each CPU required by the application environment is reached. A rough sketch of one pass of these steps is given below.
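- as a rough illustration only (not the patented implementation), one pass of steps 101 to 103 could be sketched in Python as follows; the `Process` fields, the `Cpu` queue, and the `balance_once` helper are hypothetical names introduced for this sketch, and a real scheduler would obtain the memory footprint and perform the migration inside the kernel:

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    pid: int
    memory_bytes: int          # memory footprint used to rank migration priority
    is_running: bool = False   # the currently executing process is never migrated

@dataclass
class Cpu:
    queue: list = field(default_factory=list)

def balance_once(local_cpu: Cpu, target_cpu: Cpu) -> bool:
    """One pass of steps 101-103: migrate the smallest-memory waiting process."""
    # Step 102: candidates are all queued processes except the one being executed;
    # a smaller memory footprint means a higher migration priority.
    candidates = [p for p in local_cpu.queue if not p.is_running]
    if not candidates:
        return False
    victim = min(candidates, key=lambda p: p.memory_bytes)

    # Step 103: migrate the highest-priority candidate to the target CPU.
    local_cpu.queue.remove(victim)
    target_cpu.queue.append(victim)
    return True
```

- a caller would invoke `balance_once` repeatedly until the loads are balanced or approximately balanced, mirroring the cycling described above.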
- each of the above steps is performed by a multiprocessor system load balancing apparatus; the apparatus may be a functional module based on the multiprocessor system, and the functional module may reside in each CPU or may be provided separately, independent of the CPUs.
- the method for load balancing a multiprocessor system provided by the embodiment of the present invention migrates the process in the local CPU queue, other than the process being executed, that occupies the smallest memory space to the target CPU, thereby reducing the number of accesses to remote node memory, or the amount of data copied, when the migrated process executes on the target CPU.
- Embodiment 2:
- considering that in certain application environments or scenarios the multiprocessor system must satisfy specific load requirements, the load balancing method of this embodiment presets a threshold for the migration priority according to the load requirement of the multiprocessor system in the application environment or scenario.
- a method for load balancing a multiprocessor system includes:
- Step 201: Determine a local CPU and a target CPU in the multiprocessor system; refer to step 101.
- Step 202: Sort migration priorities according to the size of the memory space occupied by the processes in the local CPU queue, where a process that occupies less memory space has a higher migration priority;
- the migration priority may be represented by a number, that is, the magnitude of the number indicates the order of the migration priorities; in this embodiment, a larger number indicates a higher migration priority. For example, if there are 9 processes in the local CPU queue, the numbers 1 to 9 may represent the migration priorities of the 9 processes according to the size of the memory space they occupy, and the process with migration priority 9 is the process occupying the smallest memory space.
- if two of the processes occupy memory space of the same size, the order of the migration priorities of the two processes may be swapped; for example, if the two processes with the highest migration priorities occupy memory space of the same size, their migration priorities are recorded as 8 and 9 respectively, but which of the two processes has its migration priority recorded as 8 and which as 9 may be set randomly; of course, other criteria may also be used to further distinguish the migration priorities of the two processes.
- Step 203: Compare the migration priority of the process in the local CPU queue, other than the currently executing process, that has the highest migration priority with a preset threshold;
- according to the application environment or scenario of the multiprocessor system, a threshold is preset for the migration priority of the processes in the local CPU queue, that is, the maximum value of the migration priority of a migratable process is set; after the comparison in this step, if the migration priority of the process with the highest migration priority is greater than the preset threshold, the process is migrated; otherwise, the process is not migrated.
- for example, the migration priority of the process being executed in the local CPU queue is 5 and the preset threshold is 7. According to this step, the migration priority 9 of the process with the highest migration priority is compared with the preset threshold 7; obviously, 9 is greater than 7, and the process with migration priority 9 is not being executed, so step 204 is performed.
- Step 204: Migrate the process in the local CPU queue, other than the process being executed, that has the highest migration priority to the target CPU.
- for example, if the migration priority of the process with the highest migration priority in the local CPU queue is 9, the process is not the one being executed, and 9 is greater than the preset threshold 7, the process is migrated to the target CPU through this step; if the process with migration priority 9 in the local CPU queue is being executed, then, apart from that process, the process with the highest migration priority is the one with migration priority 8, and since 8 is greater than the preset threshold 7, the process with migration priority 8 in the local CPU is migrated to the target CPU.
- after the process with migration priority 8 in the local CPU is migrated to the target CPU, it is determined whether the multiprocessing system is load balanced; if it is, the procedure ends; if not, steps 201 to 204 are repeated.
- the method provided by this embodiment of the present invention presets a threshold for the migration priority according to the application environment or scenario of the multiprocessing system, so that the multiprocessing system can achieve load balance as far as possible while satisfying the load requirements of each CPU in that application environment or scenario; since the migrated processes are processes that occupy less memory space, the number of accesses to the remote node memory, or the amount of data copied, when the migrated processes execute on the target CPU can be reduced.
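- a minimal sketch of the threshold check of Embodiment 2, reusing the hypothetical `Process` and `Cpu` classes from the earlier sketch; the numeric ranking (a larger number means a higher priority) and the `threshold` parameter follow the 1-to-9 example above, and all names are illustrative assumptions rather than the patent's interfaces:

```python
def balance_with_threshold(local_cpu: Cpu, target_cpu: Cpu, threshold: int) -> bool:
    """Embodiment 2 sketch: migrate only when the best candidate's numeric
    migration priority exceeds the preset threshold (e.g. 7)."""
    if not local_cpu.queue:
        return False
    # Number the whole queue so the smallest-memory process gets the largest
    # number, e.g. 1..9 for nine processes (9 = highest migration priority).
    ranked = sorted(local_cpu.queue, key=lambda p: p.memory_bytes, reverse=True)
    priority = {p.pid: rank for rank, p in enumerate(ranked, start=1)}

    candidates = [p for p in local_cpu.queue if not p.is_running]
    if not candidates:
        return False
    best = max(candidates, key=lambda p: priority[p.pid])
    if priority[best.pid] <= threshold:   # e.g. priority 9 > threshold 7 => migrate
        return False
    local_cpu.queue.remove(best)
    target_cpu.queue.append(best)
    return True
```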
- Embodiment 3:
- since the execution time of a process can, to a certain extent, reflect the size of the memory space it occupies (in general, the longer the execution time, the more memory space the process occupies), an embodiment of the present invention provides, based on this consideration, another method for load balancing of a multiprocessor system.
- Step 301 Determine a local central processing unit CPU and a target CPU in the multiprocessor system; refer to step 101.
- Step 302 Calculate an execution time of each process in the local CPU queue.
- for example, if there are 9 processes in the local CPU queue, the execution time of each process is obtained by subtracting the process's execution start time from its execution end time, or by using a timer.
- Step 303: Compare the execution times of the processes and sort the migration priorities, where a process with a shorter execution time has a higher migration priority;
- the magnitude of a number can be used to indicate the order of migration priorities. For example, this step can compare the execution times of the nine processes in the local CPU queue and use the numbers 1 to 9 to indicate the migration priorities of the nine processes, that is, to sort the migration priorities. The shorter the execution time, the higher the migration priority; the migration priority of the process with the shortest execution time is 9 and that of the process with the longest execution time is 1.
- if the execution times of the nine processes in the local CPU queue are all different, step 306 is continued.
- if, for example, the execution times of the three processes with migration priorities 5, 6, and 7 are the same, the migration priorities of the three processes may be randomly recorded as 5, 6, and 7.
- the priority of the at least two processes with the same execution time can be further determined by steps 304-305.
- Step 304 If there are at least two processes with the same execution time in the local CPU queue, calculate a waiting time of the at least two processes with the same execution time;
- the waiting time of a process refers to the time elapsed from the moment the process last finished executing on the local CPU to the moment this step is performed; the waiting time can be calculated by subtracting the moment the process last finished executing on the local CPU from the moment this step is performed, or by using a timer; of course, other methods that can obtain the waiting time of a process are not ruled out.
- for example, if the execution times of the three processes with migration priorities 5, 6, and 7 in the local CPU are the same, the waiting times of the three processes are calculated separately.
- Step 305 Compare the waiting time of the at least two processes with the same execution time, and perform the sorting of the migration priorities; and the longer the waiting time, the higher the migration priority.
- through the above steps, the process in the local CPU queue, other than the process being executed, that has the highest migration priority is obtained; next, if only load balancing of the multiprocessor system is considered, step 3061 is performed, as shown in FIG. 3; if the load requirement of the multiprocessor system in the application environment or scenario is considered, a threshold needs to be set for the migration priority according to that load requirement, and step 3062 is performed, see FIG. 4.
- Step 3061 Migrate the process in the local CPU queue except the process being executed and having the highest migration priority to the target CPU.
- if the multiprocessor system's load is balanced at this point, the procedure ends; otherwise, steps 301 to 3061 shown in FIG. 3 are cycled until the load is balanced. This embodiment uses the execution time of a process to reflect the size of the memory space it occupies, and the process in the local CPU, other than the one being executed, with the shortest execution time is migrated to the target CPU until the multiprocessor system is load balanced, thereby reducing the number of accesses to remote node memory, or the amount of data copied, when the migrated process is executed on the target CPU.
- Step 3062: Compare the migration priority of the process in the local CPU queue, other than the process being executed, that has the highest migration priority with a preset threshold; if the migration priority of the process is greater than the preset threshold, migrate it to the target CPU.
- if the multiprocessor system's load is balanced at this point, the procedure ends; if the load is still unbalanced, steps 301 to 3062 shown in FIG. 4 are cycled so that the multiprocessor system achieves load balance as far as possible while satisfying the conditions of the application environment or scenario.
- this embodiment of the present invention aims to achieve load balancing of the multiprocessor system as far as possible in the application environment or scenario of the multiprocessor system; since the migrated processes are processes that occupy less memory space, the number of accesses to the remote node's memory, or the amount of data copied, when the migrated processes execute on the target CPU can be reduced.
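- purely as an illustration of Embodiment 3 (with invented field names such as `exec_seconds` and `last_end`), the execution-time ordering with the waiting-time tie-break could be sketched as:

```python
import time
from dataclasses import dataclass

@dataclass
class TimedProcess:
    pid: int
    exec_seconds: float   # last execution duration (end time minus start time)
    last_end: float       # timestamp when the process last finished on the local CPU
    is_running: bool = False

def pick_candidate(queue):
    """Embodiment 3 sketch: shortest execution time wins; among processes with
    equal execution times, the one that has waited the longest wins."""
    now = time.monotonic()
    candidates = [p for p in queue if not p.is_running]
    if not candidates:
        return None
    # Primary key: execution time (ascending). Secondary key: waiting time since
    # last_end (descending, hence the negation).
    return min(candidates, key=lambda p: (p.exec_seconds, -(now - p.last_end)))
```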
- the embodiment of the present invention further provides a device corresponding to the method for load balancing of the multiprocessor system, as shown in FIG. 5, the device includes:
- a determining unit 51 configured to determine a local central processing unit CPU and a target CPU in the multiprocessor system
- the sorting unit 52 is configured to perform the sorting of the migration priority according to the size of the memory occupied by the process in the local CPU queue; and the process with less memory space has a higher migration priority;
- the migration unit 53 is configured to migrate the process with the highest migration priority in the local CPU queue except the process being executed to the target CPU.
- the apparatus for load balancing of the multiprocessor system migrates the process in the local CPU queue, other than the process being executed, that occupies the smallest memory space to the target CPU, thereby reducing the number of accesses to remote node memory, or the amount of data copied, when the migrated process executes on the target CPU.
- the sorting unit 52 includes:
- a first calculation subunit configured to calculate the execution time of each process in the local CPU queue;
- a first comparison subunit configured to compare the execution times of the processes and sort the migration priorities, where a process with a shorter execution time has a higher migration priority.
- the sorting unit 52 may further include:
- a second calculation subunit configured to calculate, when there are at least two processes with the same execution time in the local CPU queue, the waiting time of the at least two processes with the same execution time; and a second comparison subunit configured to compare the waiting times of the at least two processes with the same execution time and sort the migration priorities, where a process with a longer waiting time has a higher migration priority.
- the above apparatus further includes:
- the comparing unit 54 is configured to compare, in the local CPU queue, the migration priority of the process with the highest migration priority and the preset threshold, except for the process being executed, where the migration priority is represented by a number;
- the migration unit 53 is specifically configured to migrate the process with the highest migration priority in the local CPU queue except the process being executed, and migrate to the target CPU if the migration priority of the process is greater than the preset threshold.
- the multiprocessor system can thus achieve load balancing as far as possible while satisfying the load requirements of certain application environments or scenarios; since the migrated processes are processes that occupy less memory space, the number of accesses to the remote node memory, or the amount of data copied, when the migrated process is executed on the target CPU can be reduced.
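- as a structural sketch only, the units of FIG. 5 and FIG. 6 can be pictured as cooperating objects; the class and method names below are invented for illustration, and the sorting logic simply reuses the memory-size ordering described above:

```python
class SortingUnit:
    """Orders the local queue: a smaller memory footprint means a higher priority."""
    def rank(self, queue):
        return sorted((p for p in queue if not p.is_running),
                      key=lambda p: p.memory_bytes)

class ComparisonUnit:
    """Optional unit of FIG. 6: admits a candidate only above the preset threshold."""
    def __init__(self, threshold):
        self.threshold = threshold
    def admits(self, priority_number):
        return priority_number > self.threshold

class MigrationUnit:
    """Moves the chosen process from the local CPU queue to the target CPU queue."""
    def migrate(self, process, local_cpu, target_cpu):
        local_cpu.queue.remove(process)
        target_cpu.queue.append(process)
```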
- the present invention can be implemented by means of software plus the necessary general-purpose hardware, and of course also by hardware, but in many cases the former is the better implementation.
- the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product.
- the computer software product is stored in a readable storage medium, such as a floppy disk, hard disk, or optical disc of a computer, and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multi Processors (AREA)
Abstract
A method and apparatus for load balancing a multiprocessor system relate to the field of resource allocation for multiprocessor systems, and serve to reduce the number of accesses to remote node memory, or the amount of data copied, when a process migrated to a target central processing unit (CPU) executes. The method for load balancing a multiprocessor system includes: determining a local CPU and a target CPU in the multiprocessor system; sorting migration priorities according to the size of the memory space occupied by the processes in the local CPU queue, where a process that occupies less memory space has a higher migration priority; and migrating the process in the local CPU queue, other than the process currently being executed, that has the highest migration priority to the target CPU. The provided solution is applicable to multiprocessor systems with a non-uniform memory access (NUMA) architecture.
Description
Method and Device for Balancing Load of Multiprocessor System
Technical Field
The present invention relates to the field of resource allocation for multiprocessor systems, and in particular to a method and apparatus for load balancing a multiprocessor system.
Background Art
In a multiprocessor system, each CPU (Central Processing Unit) maintains its own separate run queue, which can lead to load imbalance among multiple CPUs. A process (or thread) therefore needs to be migrated from a heavily loaded CPU to a lightly loaded CPU to achieve load balancing. The CPU from which a process migrates is called the local CPU, and the CPU to which the process migrates is called the target CPU.
The prior art proposes a method that migrates processes according to how hot or cold they are in the cache, in order to achieve load balancing. Specifically: first, a threshold (maximum value) is set for a process's waiting time, where the waiting time of a process is the time elapsed, at the moment when it is decided whether to migrate the process, since the process last finished executing; then, it is determined whether the waiting time of a process on the local CPU is greater than the threshold; if so, the process is considered cold in the cache and is migrated to the target CPU; if not, the process is considered hot in the cache and is not migrated. The method is applied cyclically on each CPU until a certain degree of load balance is reached among the CPUs.
Although the above method can achieve load balancing, it may also migrate processes that occupy a large amount of memory space to the target CPU. When these processes execute on the target CPU, the memory of the remote node must be accessed many times, or a large amount of process-related data must be copied from the remote node's memory to the local node's memory, which degrades system performance.
Summary of the Invention
Embodiments of the present invention provide a method and apparatus for load balancing a multi-CPU system, with the purpose of reducing the number of accesses to remote node memory, or the amount of data copied, when a process migrated to the target CPU executes.
To achieve the above purpose, the embodiments of the present invention adopt the following technical solutions:
A method for load balancing a multiprocessor system includes:
determining a local central processing unit (CPU) and a target CPU in the multiprocessor system;
sorting migration priorities according to the size of the memory space occupied by the processes in the local CPU queue, where a process that occupies less memory space has a higher migration priority; and migrating the process in the local CPU queue, other than the process currently being executed, that has the highest migration priority to the target CPU.
An apparatus for load balancing a multiprocessor system includes:
a determining unit, configured to determine a local central processing unit (CPU) and a target CPU in the multiprocessor system;
a sorting unit, configured to sort migration priorities according to the size of the memory space occupied by the processes in the local CPU queue, where a process that occupies less memory space has a higher migration priority; and
a migration unit, configured to migrate the process in the local CPU queue, other than the process currently being executed, that has the highest migration priority to the target CPU.
In the method and apparatus for load balancing a multiprocessor system provided by the embodiments of the present invention, migration priorities are sorted according to the size of the memory space occupied by the processes in the local CPU queue, and the process in the local CPU queue, other than the process currently being executed, that has the highest migration priority is selected and migrated to the target CPU. Because a process that occupies less memory space has a higher migration priority, the process occupying the smallest memory space in the local CPU queue can be migrated to the target CPU first, thereby reducing the number of accesses to remote node memory, or the amount of data copied, when the migrated process executes on the target CPU.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below are only some embodiments of the present invention; persons of ordinary skill in the art may further derive other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of a method for load balancing a multiprocessor system according to Embodiment 1; FIG. 2 is a schematic diagram of a method for load balancing a multiprocessor system according to Embodiment 2; FIG. 3 is a schematic diagram of a method for load balancing a multiprocessor system according to Embodiment 3; FIG. 4 is a schematic diagram of another method for load balancing a multiprocessor system according to Embodiment 3; FIG. 5 is a block diagram of an apparatus for load balancing a multiprocessor system according to an embodiment of the present invention; FIG. 6 is a block diagram of another apparatus for load balancing a multiprocessor system according to an embodiment of the present invention.
Detailed Description of the Embodiments
A multiprocessor system is a system that has at least two CPUs for data processing. For example, a NUMA (Non-Uniform Memory Access Architecture) system is a multiprocessor system in which the CPUs in all nodes can access all of the system's physical memory, while the latency for a CPU in one node to access the memory of different other nodes differs. The solution provided by the present invention is applicable to multiprocessor systems with a NUMA architecture.
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Evidently, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Embodiment 1:
As shown in FIG. 1, an embodiment of the present invention provides a method for load balancing a multiprocessor system, including:
Step 101: Determine a local central processing unit (CPU) and a target CPU in the multiprocessor system. For a multiprocessor system, each clock interrupt checks whether a load imbalance exists in the system; when the multiprocessor system is unbalanced, load balancing processing is required. Load balancing processing refers to migrating a process from a heavily loaded CPU to a lightly loaded CPU, where the CPU from which the process migrates is called the local CPU and the CPU to which the process migrates is called the target CPU; generally, a heavily loaded CPU can serve as the local CPU and a lightly loaded CPU can serve as the target CPU.
Step 102: Sort migration priorities according to the size of the memory space occupied by the processes in the local CPU queue, where a process that occupies less memory space has a higher migration priority.
The migration priority represents the order of precedence for process migration; that is, the higher a process's migration priority, the sooner the process is migrated.
This step may be performed by reading the size of the memory space occupied by every process in the local CPU queue and sorting migration priorities according to the size of the memory space occupied by each process, where a process that occupies less memory space has a higher migration priority. For any process in the local queue, the size of the memory space it occupies can be obtained by examining the attributes of its corresponding page tables.
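As an illustrative stand-in only: the description above speaks of examining page-table attributes inside the scheduler, but on a Linux system a user-space approximation of a process's memory footprint could, for example, be read from /proc/<pid>/statm (the resident-set size in pages). The helper below is a hypothetical sketch under that assumption, not the patent's mechanism.

```python
import os

def resident_bytes(pid: int) -> int:
    """Approximate the memory footprint of a process from /proc/<pid>/statm.

    Illustrative stand-in only: the description examines page-table attributes
    in the kernel, whereas this reads the resident-set size from procfs.
    """
    with open(f"/proc/{pid}/statm") as f:
        # statm fields are in pages: size resident shared text lib data dt
        resident_pages = int(f.read().split()[1])
    return resident_pages * os.sysconf("SC_PAGE_SIZE")
```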
Step 103: Migrate the process in the local CPU queue, other than the process currently being executed, that has the highest migration priority to the target CPU.
The above three steps may be repeated until the multiprocessor system reaches or approaches load balance, or until the load of each CPU required by the application environment is reached.
It should be noted that the above steps are performed by an apparatus for load balancing a multiprocessor system; the apparatus may be a functional module based on the multiprocessor system, and the functional module may reside in each CPU or may be provided separately, independent of the CPUs.
The method for load balancing a multiprocessor system provided by this embodiment of the present invention migrates the process in the local CPU queue, other than the process being executed, that occupies the smallest memory space to the target CPU, thereby reducing the number of accesses to remote node memory, or the amount of data copied, when the migrated process executes on the target CPU.
Embodiment 2:
Considering that in certain application environments or scenarios the multiprocessor system must satisfy the load requirements of those environments or scenarios, the method for load balancing a multiprocessor system provided by this embodiment of the present invention presets a threshold for the migration priority according to the load requirement of the multiprocessor system in the application environment or scenario.
As shown in FIG. 2, the method for load balancing a multiprocessor system provided by this embodiment of the present invention includes:
Step 201: Determine a local CPU and a target CPU in the multiprocessor system.
Refer to step 101.
Step 202: Sort migration priorities according to the size of the memory space occupied by the processes in the local CPU queue, where a process that occupies less memory space has a higher migration priority.
For an explanation of this step, refer to step 102. In addition, the migration priority may be represented by a number, that is, the magnitude of the number indicates the order of migration priorities; in this embodiment, a larger number indicates a higher migration priority. For example, if there are nine processes in the local CPU queue, the numbers 1 to 9 may represent the migration priorities of the nine processes according to the size of the memory space they occupy, with the process of migration priority 9 being the one that occupies the smallest memory space. If two of the nine processes occupy memory space of the same size, the order of the migration priorities of these two processes may be swapped; for example, if the two processes with the highest migration priorities occupy memory space of the same size, their migration priorities are recorded as 8 and 9 respectively, but which of the two is recorded as 8 and which as 9 may be set randomly; of course, other criteria may also be used to further distinguish the migration priorities of the two processes.
Step 203: Compare the migration priority of the process in the local CPU queue, other than the process currently being executed, that has the highest migration priority with a preset threshold.
According to the application environment or scenario of the multiprocessor system, a threshold is preset for the migration priority of the processes in the local CPU queue, that is, a maximum value is set against which the migration priority of a migratable process is judged. After the comparison in this step, if the migration priority of the process with the highest migration priority is greater than the preset threshold, the method proceeds to step 204; otherwise, the process is not migrated.
For example, in this embodiment the migration priority of the process being executed in the local CPU queue is 5 and the preset threshold is 7. According to this step, the migration priority 9 of the process with the highest migration priority is compared with the preset threshold 7; clearly 9 is greater than 7, and the process with migration priority 9 is not the process being executed, so step 204 is performed.
Step 204: Migrate the process in the local CPU queue, other than the process currently being executed, that has the highest migration priority to the target CPU.
For example, if the migration priority of the process with the highest migration priority in the local CPU queue is 9, the process is not the one being executed, and 9 is greater than the preset threshold 7, the process is migrated to the target CPU through this step. If the process with migration priority 9 in the local CPU queue is being executed, then apart from that process the process with the highest migration priority is the one with migration priority 8; since 8 is greater than the preset threshold 7, the process with migration priority 8 in the local CPU is migrated to the target CPU. After the process with migration priority 8 in the local CPU is migrated to the target CPU, it is determined whether the multiprocessing system is load balanced; if it is, the procedure ends; if not, steps 201 to 204 are repeated.
Because the method provided by this embodiment of the present invention presets a threshold for the migration priority according to the application environment or scenario of the multiprocessing system, the multiprocessing system can achieve load balance as far as possible while satisfying the load requirements of the CPUs in that application environment or scenario. Because the migrated processes are processes that occupy less memory space, the number of accesses to remote node memory, or the amount of data copied, when the migrated processes execute on the target CPU can be reduced.
Embodiment 3:
Because a process's execution time can, to a certain extent, reflect the size of the memory space it occupies (in general, a process's execution time is a positive function of the memory space it occupies; that is, the longer the execution time, the more memory space the process occupies), and because the execution time of a process refers to the time taken from the start to the end of its execution, based on the above considerations and as shown in FIG. 3, an embodiment of the present invention provides another method for load balancing a multiprocessor system.
Step 301: Determine a local central processing unit (CPU) and a target CPU in the multiprocessor system. Refer to step 101.
Step 302: Calculate the execution time of each process in the local CPU queue.
For example, if there are nine processes in the local CPU queue, the execution time of each of the nine processes may be calculated by subtracting the time at which the process started executing from the time at which it finished executing, or by using a timer.
Step 303: Compare the execution times of the processes and sort migration priorities, where a process with a shorter execution time has a higher migration priority.
In this embodiment, the magnitude of a number may represent the order of migration priorities. For example, this step may compare the execution times of the nine processes in the local CPU queue and use the numbers 1 to 9 to represent the migration priorities of the nine processes, that is, to sort the migration priorities. A process with a shorter execution time has a higher migration priority; the migration priority of the process with the shortest execution time is recorded as 9 and that of the process with the longest execution time as 1.
If the execution times of the nine processes in the local CPU queue are all different, proceed to step 306.
If at least two processes in the local CPU queue have the same execution time, their migration priorities may be recorded randomly; for example, if the execution times of the three processes with migration priorities 5, 6, and 7 are the same, the migration priorities of these three processes may be randomly recorded as 5, 6, and 7. Of course, the priorities of the at least two processes with the same execution time may also be further determined through steps 304 to 305.
Step 304: If at least two processes in the local CPU queue have the same execution time, calculate the waiting time of the at least two processes with the same execution time.
The waiting time of a process refers to the time elapsed from the moment the process last finished executing on the local CPU to the moment this step is performed. The waiting time of a process may be calculated by subtracting the moment the process last finished executing on the local CPU from the moment this step is performed, or by using a timer; of course, other methods capable of obtaining the waiting time of a process are not excluded.
For example, if the execution times of the three processes with migration priorities 5, 6, and 7 on the local CPU are the same, the waiting times of the three processes are calculated separately.
Step 305: Compare the waiting times of the at least two processes with the same execution time and sort their migration priorities, where a process with a longer waiting time has a higher migration priority.
For example, the waiting times of the three processes with migration priorities 5, 6, and 7 on the local CPU are compared, and their migration priorities are sorted. A longer waiting time means more time has passed since the process last finished executing, so its migration priority is higher; conversely, a shorter waiting time means less time has passed since the process last finished executing, so its migration priority is lower. Processes with the same execution time are ranked by migration priority according to this method.
Through the above steps, the process in the local CPU queue, other than the process currently being executed, that has the highest migration priority is obtained. Next, if only balancing the load of the multiprocessor system is considered, step 3061 is performed, see FIG. 3; if the load requirement of the multiprocessor system in an application environment or scenario is considered, a threshold needs to be preset for the migration priority according to that load requirement, and step 3062 is performed, see FIG. 4.
Step 3061: Migrate the process in the local CPU queue, other than the process currently being executed, that has the highest migration priority to the target CPU.
If at this point the load of the multiprocessor system is balanced, the procedure ends; if the load is still unbalanced, steps 301 to 3061 shown in FIG. 3 are repeated until the load is balanced. This embodiment uses the execution time of a process to reflect the size of the memory space it occupies, and migrates the process in the local CPU, other than the process being executed, that has the shortest execution time to the target CPU until the multiprocessor system is load balanced, thereby reducing the number of accesses to remote node memory, or the amount of data copied, when the migrated processes execute on the target CPU.
Step 3062: Compare the migration priority of the process in the local CPU queue, other than the process currently being executed, that has the highest migration priority with a preset threshold; if the migration priority of the process is greater than the preset threshold, migrate it to the target CPU.
51. Compare the migration priority of the process in the local CPU queue, other than the process being executed, that has the highest migration priority with the preset threshold.
52. Migrate the process in the local CPU queue, other than the process being executed, that has the highest migration priority to the target CPU.
If at this point the load of the multiprocessor system is balanced, the procedure ends; if the load is still unbalanced, steps 301 to 3062 shown in FIG. 4 are repeated, so that the multiprocessor system achieves load balance as far as possible while satisfying the conditions of the application environment or scenario.
This embodiment of the present invention aims to achieve load balance of the multiprocessor system as far as possible in the application environment or scenario of the multiprocessor system. Because the migrated processes are processes that occupy less memory space, the number of accesses to remote node memory, or the amount of data copied, when the migrated processes execute on the target CPU can be reduced. An embodiment of the present invention further provides an apparatus corresponding to the above method for load balancing a multiprocessor system; as shown in FIG. 5, the apparatus includes:
a determining unit 51, configured to determine a local central processing unit (CPU) and a target CPU in the multiprocessor system;
a sorting unit 52, configured to sort migration priorities according to the size of the memory space occupied by the processes in the local CPU queue, where a process that occupies less memory space has a higher migration priority; and
a migration unit 53, configured to migrate the process in the local CPU queue, other than the process currently being executed, that has the highest migration priority to the target CPU.
The apparatus for load balancing a multiprocessor system provided by this embodiment of the present invention migrates the process in the local CPU queue, other than the process being executed, that occupies the smallest memory space to the target CPU, thereby reducing the number of accesses to remote node memory, or the amount of data copied, when the migrated process executes on the target CPU.
Preferably, the sorting unit 52 includes:
a first calculation subunit, configured to calculate the execution time of each process in the local CPU queue; and a first comparison subunit, configured to compare the execution times of the processes and sort migration priorities, where a process with a shorter execution time has a higher migration priority.
If at least two processes in the local CPU queue have the same execution time, the sorting unit 52 may further include:
a second calculation subunit, configured to calculate, when at least two processes in the local CPU queue have the same execution time, the waiting time of the at least two processes with the same execution time; and a second comparison subunit, configured to compare the waiting times of the at least two processes with the same execution time and sort migration priorities, where a process with a longer waiting time has a higher migration priority.
Further, if the requirements of certain application environments or scenarios on the multiprocessor system are considered, a threshold needs to be preset for the migration priority. As shown in FIG. 6, the above apparatus further includes:
a comparison unit 54, configured to compare the migration priority of the process in the local CPU queue, other than the process being executed, that has the highest migration priority with a preset threshold, where the migration priority is represented by a number. In this case, the migration unit 53 is specifically configured to migrate the process in the local CPU queue, other than the process being executed, that has the highest migration priority to the target CPU when the migration priority of that process is greater than the preset threshold.
By providing the comparison unit 54, the multiprocessor system can achieve load balance as far as possible while satisfying the load requirements of certain application environments or scenarios. Because the migrated processes are processes that occupy less memory space, the number of accesses to remote node memory, or the amount of data copied, when the migrated processes execute on the target CPU can be reduced.
From the description of the above embodiments, persons skilled in the art can clearly understand that the present invention may be implemented by software plus the necessary general-purpose hardware, and certainly may also be implemented by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solutions of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a readable storage medium, such as a floppy disk, hard disk, or optical disc of a computer, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The foregoing are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable by persons skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims
1. A method for load balancing a multiprocessor system, comprising:
determining a local central processing unit (CPU) and a target CPU in the multiprocessor system;
sorting migration priorities according to the size of the memory space occupied by the processes in the local CPU queue, where a process that occupies less memory space has a higher migration priority; and
migrating the process in the local CPU queue, other than the process currently being executed, that has the highest migration priority to the target CPU.
2. The method according to claim 1, further comprising: comparing the migration priority of the process in the local CPU queue, other than the process currently being executed, that has the highest migration priority with a preset threshold, wherein the migration priority is represented by a number and a larger number indicates a higher migration priority;
wherein the migrating of the process in the local CPU queue, other than the process currently being executed, that has the highest migration priority to the target CPU specifically comprises:
migrating the process in the local CPU queue, other than the process currently being executed, that has the highest migration priority to the target CPU when the migration priority of that process is greater than the preset threshold.
3. The method according to claim 1 or 2, wherein sorting migration priorities according to the size of the memory space occupied by the processes in the local CPU queue, where a process that occupies less memory space has a higher migration priority, comprises:
calculating the execution time of each process in the local CPU queue; and
comparing the execution times of the processes and sorting migration priorities, where a process with a shorter execution time has a higher migration priority.
4. The method according to claim 3, wherein sorting migration priorities according to the size of the memory space occupied by the processes in the local CPU queue, where a process that occupies less memory space has a higher migration priority, further comprises:
if at least two processes in the local CPU queue have the same execution time, calculating the waiting time of the at least two processes with the same execution time; and comparing the waiting times of the at least two processes with the same execution time and sorting migration priorities, where a process with a longer waiting time has a higher migration priority.
5. An apparatus for load balancing a multiprocessor system, comprising:
a determining unit, configured to determine a local central processing unit (CPU) and a target CPU in the multiprocessor system;
a sorting unit, configured to sort migration priorities according to the size of the memory space occupied by the processes in the local CPU queue, where a process that occupies less memory space has a higher migration priority; and
a migration unit, configured to migrate the process in the local CPU queue, other than the process currently being executed, that has the highest migration priority to the target CPU.
6. The apparatus according to claim 5, further comprising:
a comparison unit, configured to compare the migration priority of the process in the local CPU queue, other than the process currently being executed, that has the highest migration priority with a preset threshold, wherein the migration priority is represented by a number;
wherein the migration unit is specifically configured to migrate the process in the local CPU queue, other than the process currently being executed, that has the highest migration priority to the target CPU when the migration priority of that process is greater than the preset threshold.
7. The apparatus according to claim 5 or 6, wherein the sorting unit comprises: a first calculation subunit, configured to calculate the execution time of each process in the local CPU queue; and a first comparison subunit, configured to compare the execution times of the processes and sort migration priorities, where a process with a shorter execution time has a higher migration priority.
8. The apparatus according to claim 7, wherein the sorting unit further comprises: a second calculation subunit, configured to calculate, when at least two processes in the local CPU queue have the same execution time, the waiting time of the at least two processes with the same execution time; and a second comparison subunit, configured to compare the waiting times of the at least two processes with the same execution time and sort migration priorities, where a process with a longer waiting time has a higher migration priority.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2011/072913 WO2011103825A2 (zh) | 2011-04-18 | 2011-04-18 | 多处理器系统负载均衡的方法和装置 |
EP11746867.8A EP2437168B1 (en) | 2011-04-18 | 2011-04-18 | Method and device for balancing load of multiprocessor system |
CN201180000363.XA CN102834807B (zh) | 2011-04-18 | 2011-04-18 | 多处理器系统负载均衡的方法和装置 |
US13/340,352 US8739167B2 (en) | 2011-04-18 | 2011-12-29 | Method and device for balancing load of multiprocessor system by sequencing migration priorities based on memory size and calculated execution time |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2011/072913 WO2011103825A2 (zh) | 2011-04-18 | 2011-04-18 | 多处理器系统负载均衡的方法和装置 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/340,352 Continuation US8739167B2 (en) | 2011-04-18 | 2011-12-29 | Method and device for balancing load of multiprocessor system by sequencing migration priorities based on memory size and calculated execution time |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2011103825A2 true WO2011103825A2 (zh) | 2011-09-01 |
WO2011103825A3 WO2011103825A3 (zh) | 2012-03-15 |
Family
ID=44507270
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2011/072913 WO2011103825A2 (zh) | 2011-04-18 | 2011-04-18 | 多处理器系统负载均衡的方法和装置 |
Country Status (4)
Country | Link |
---|---|
US (1) | US8739167B2 (zh) |
EP (1) | EP2437168B1 (zh) |
CN (1) | CN102834807B (zh) |
WO (1) | WO2011103825A2 (zh) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103117923A (zh) * | 2013-01-18 | 2013-05-22 | 杭州华三通信技术有限公司 | 一种进程管理方法和设备 |
CN103164321A (zh) * | 2013-03-20 | 2013-06-19 | 华为技术有限公司 | 中央处理器占用率测量方法及装置 |
US8739167B2 (en) | 2011-04-18 | 2014-05-27 | Huawei Technologies Co., Ltd. | Method and device for balancing load of multiprocessor system by sequencing migration priorities based on memory size and calculated execution time |
CN104335175A (zh) * | 2012-06-29 | 2015-02-04 | 英特尔公司 | 基于系统性能度量在系统节点之间标识和迁移线程的方法和系统 |
CN105959820A (zh) * | 2016-06-06 | 2016-09-21 | 汪栋 | 一种利用分布式计算实现重度游戏在智能电视终端设备呈现的方法及系统 |
Families Citing this family (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8984526B2 (en) * | 2012-03-09 | 2015-03-17 | Microsoft Technology Licensing, Llc | Dynamic processor mapping for virtual machine network traffic queues |
US9390020B2 (en) | 2012-07-06 | 2016-07-12 | Seagate Technology Llc | Hybrid memory with associative cache |
US9772948B2 (en) | 2012-07-06 | 2017-09-26 | Seagate Technology Llc | Determining a criterion for movement of data from a primary cache to a secondary cache |
US9594685B2 (en) | 2012-07-06 | 2017-03-14 | Seagate Technology Llc | Criteria for selection of data for a secondary cache |
US9477591B2 (en) * | 2012-07-06 | 2016-10-25 | Seagate Technology Llc | Memory access requests in hybrid memory system |
US9529724B2 (en) | 2012-07-06 | 2016-12-27 | Seagate Technology Llc | Layered architecture for hybrid controller |
US9104578B2 (en) | 2012-07-06 | 2015-08-11 | Seagate Technology Llc | Defining address ranges used to cache speculative read data |
US9342366B2 (en) * | 2012-10-17 | 2016-05-17 | Electronics And Telecommunications Research Institute | Intrusion detection apparatus and method using load balancer responsive to traffic conditions between central processing unit and graphics processing unit |
US9336057B2 (en) * | 2012-12-21 | 2016-05-10 | Microsoft Technology Licensing, Llc | Assigning jobs to heterogeneous processing modules |
US20140282584A1 (en) * | 2013-03-14 | 2014-09-18 | Silicon Graphics International Corp. | Allocating Accelerators to Threads in a High Performance Computing System |
US9785564B2 (en) | 2013-08-20 | 2017-10-10 | Seagate Technology Llc | Hybrid memory with associative cache |
US9367247B2 (en) | 2013-08-20 | 2016-06-14 | Seagate Technology Llc | Memory access requests in hybrid memory system |
US9507719B2 (en) | 2013-08-20 | 2016-11-29 | Seagate Technology Llc | Garbage collection in hybrid memory system |
US9875185B2 (en) * | 2014-07-09 | 2018-01-23 | Intel Corporation | Memory sequencing with coherent and non-coherent sub-systems |
CN104156322B (zh) * | 2014-08-05 | 2017-10-17 | 华为技术有限公司 | 一种缓存管理方法及缓存管理装置 |
CN105468538B (zh) * | 2014-09-12 | 2018-11-06 | 华为技术有限公司 | 一种内存迁移方法及设备 |
CN105808443B (zh) * | 2014-12-29 | 2019-01-18 | 华为技术有限公司 | 一种数据迁移的方法、装置及系统 |
CN105204938B (zh) * | 2015-11-02 | 2019-01-11 | 重庆大学 | 一种内存访问的数据密集型进程调度方法 |
CN106020971B (zh) * | 2016-05-10 | 2020-01-31 | 广东睿江云计算股份有限公司 | 云主机系统中的cpu调度方法及装置 |
CN106020979B (zh) * | 2016-05-17 | 2019-05-31 | 青岛海信移动通信技术股份有限公司 | 进程的调度方法及装置 |
CN107844370B (zh) * | 2016-09-19 | 2020-04-17 | 杭州海康威视数字技术股份有限公司 | 一种实时任务调度方法及装置 |
CN107168778B (zh) * | 2017-03-30 | 2021-01-15 | 联想(北京)有限公司 | 一种任务处理方法及任务处理装置 |
CN108549574B (zh) * | 2018-03-12 | 2022-03-15 | 深圳市万普拉斯科技有限公司 | 线程调度管理方法、装置、计算机设备和存储介质 |
CN112698934B (zh) * | 2019-10-22 | 2023-12-15 | 华为技术有限公司 | 资源调度方法和装置、pmd调度装置、电子设备、存储介质 |
CN110928661B (zh) * | 2019-11-22 | 2023-06-16 | 北京浪潮数据技术有限公司 | 一种线程迁移方法、装置、设备及可读存储介质 |
CN111597054B (zh) * | 2020-07-24 | 2020-12-04 | 北京卡普拉科技有限公司 | 一种信息处理方法、系统、电子设备及存储介质 |
CN114071046A (zh) * | 2020-07-31 | 2022-02-18 | 上海华博信息服务有限公司 | 一种特种影片转制服务平台 |
CN112559176B (zh) * | 2020-12-11 | 2024-07-19 | 广州橙行智动汽车科技有限公司 | 一种指令处理方法和装置 |
CN113254186A (zh) * | 2021-06-15 | 2021-08-13 | 阿里云计算有限公司 | 一种进程调度方法、调度器及存储介质 |
CN113326140A (zh) * | 2021-06-30 | 2021-08-31 | 统信软件技术有限公司 | 一种进程迁移方法、装置、计算设备以及存储介质 |
CN113688053B (zh) * | 2021-09-01 | 2023-07-28 | 北京计算机技术及应用研究所 | 云化测试工具的排队使用方法和系统 |
CN113553164B (zh) * | 2021-09-17 | 2022-02-25 | 统信软件技术有限公司 | 一种进程迁移方法、计算设备及存储介质 |
CN113918527B (zh) * | 2021-12-15 | 2022-04-12 | 西安统信软件技术有限公司 | 一种基于文件缓存的调度方法、装置与计算设备 |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5506987A (en) * | 1991-02-01 | 1996-04-09 | Digital Equipment Corporation | Affinity scheduling of processes on symmetric multiprocessing systems |
US7065766B2 (en) * | 2002-07-11 | 2006-06-20 | International Business Machines Corporation | Apparatus and method for load balancing of fixed priority threads in a multiple run queue environment |
US7565653B2 (en) | 2004-02-20 | 2009-07-21 | Sony Computer Entertainment Inc. | Methods and apparatus for processor task migration in a multi-processor system |
US8296615B2 (en) * | 2006-11-17 | 2012-10-23 | Infosys Limited | System and method for generating data migration plan |
KR20090005921A (ko) * | 2007-07-10 | 2009-01-14 | 삼성전자주식회사 | 대칭적 다중 프로세서 시스템에서의 로드 밸런싱 방법 및장치 |
US8627325B2 (en) * | 2008-01-03 | 2014-01-07 | Hewlett-Packard Development Company, L.P. | Scheduling memory usage of a workload |
CN101446910B (zh) * | 2008-12-08 | 2011-06-22 | 哈尔滨工程大学 | 基于smp的高级最早期限优先算法任务调度方法 |
CN101887383B (zh) * | 2010-06-30 | 2013-08-21 | 中山大学 | 一种进程实时调度方法 |
EP2437168B1 (en) | 2011-04-18 | 2023-09-06 | Huawei Technologies Co., Ltd. | Method and device for balancing load of multiprocessor system |
-
2011
- 2011-04-18 EP EP11746867.8A patent/EP2437168B1/en active Active
- 2011-04-18 WO PCT/CN2011/072913 patent/WO2011103825A2/zh active Application Filing
- 2011-04-18 CN CN201180000363.XA patent/CN102834807B/zh active Active
- 2011-12-29 US US13/340,352 patent/US8739167B2/en active Active
Non-Patent Citations (1)
Title |
---|
See references of EP2437168A4 * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8739167B2 (en) | 2011-04-18 | 2014-05-27 | Huawei Technologies Co., Ltd. | Method and device for balancing load of multiprocessor system by sequencing migration priorities based on memory size and calculated execution time |
CN104335175A (zh) * | 2012-06-29 | 2015-02-04 | 英特尔公司 | 基于系统性能度量在系统节点之间标识和迁移线程的方法和系统 |
US9952905B2 (en) | 2012-06-29 | 2018-04-24 | Intel Corporation | Methods and systems to identify and migrate threads among system nodes based on system performance metrics |
CN103117923A (zh) * | 2013-01-18 | 2013-05-22 | 杭州华三通信技术有限公司 | 一种进程管理方法和设备 |
CN103164321A (zh) * | 2013-03-20 | 2013-06-19 | 华为技术有限公司 | 中央处理器占用率测量方法及装置 |
CN105959820A (zh) * | 2016-06-06 | 2016-09-21 | 汪栋 | 一种利用分布式计算实现重度游戏在智能电视终端设备呈现的方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
CN102834807A (zh) | 2012-12-19 |
WO2011103825A3 (zh) | 2012-03-15 |
US8739167B2 (en) | 2014-05-27 |
EP2437168A2 (en) | 2012-04-04 |
CN102834807B (zh) | 2015-09-09 |
US20120266175A1 (en) | 2012-10-18 |
EP2437168A4 (en) | 2012-08-29 |
EP2437168B1 (en) | 2023-09-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2011103825A2 (zh) | 多处理器系统负载均衡的方法和装置 | |
WO2017166777A1 (zh) | 一种任务调度方法及装置 | |
JP4963018B2 (ja) | スケジューリング方法およびスケジューリング装置 | |
US9588810B2 (en) | Parallelism-aware memory request scheduling in shared memory controllers | |
US8453150B2 (en) | Multithread application-aware memory scheduling scheme for multi-core processors | |
US20140059554A1 (en) | Process grouping for improved cache and memory affinity | |
US11340945B2 (en) | Memory congestion aware NUMA management | |
JP6260303B2 (ja) | 演算処理装置及び演算処理装置の制御方法 | |
TW201715381A (zh) | 用於在存在衝突時進行高效任務排程的方法 | |
US9547528B1 (en) | Pizza scheduler | |
JP6296678B2 (ja) | ソフトリアルタイムオペレーティングシステムの実時間性を確保する方法及び装置 | |
JP7246308B2 (ja) | デュアルモードローカルデータストア | |
US20160253216A1 (en) | Ordering schemes for network and storage i/o requests for minimizing workload idle time and inter-workload interference | |
WO2018036104A1 (zh) | 一种布署虚拟机的方法、系统以及物理服务器 | |
US9489295B2 (en) | Information processing apparatus and method | |
CN113326140A (zh) | 一种进程迁移方法、装置、计算设备以及存储介质 | |
US20140189329A1 (en) | Cooperative thread array granularity context switch during trap handling | |
US20200019434A1 (en) | Data Flow Control in a Parallel Processing System | |
JPWO2012101759A1 (ja) | プロセッサ処理方法、およびプロセッサシステム | |
JP6206524B2 (ja) | データ転送装置、データ転送方法、プログラム | |
JP5847313B2 (ja) | 情報処理装置 | |
JP2019159750A (ja) | データ転送装置、データ転送方法、プログラム | |
JP2019159751A (ja) | データ転送装置、データ転送方法、プログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201180000363.X Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11746867 Country of ref document: EP Kind code of ref document: A2 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011746867 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |